AI-powered experiences are inevitable — here are key considerations for ensuring integrity and authenticity

By Ajay Keni

A recent open letter from artificial intelligence (AI) and technology experts called for an immediate pause of at least six months on the training of AI systems more powerful than GPT-4. The stated catalyst for this letter: “profound risks to society and humanity.”

If history has taught us anything, it’s that emerging innovation is hard to contain once introduced into the wild. Consider the introduction of the automobile or the internet, both of which had naysayers as well as early adopters. When standing on the brink of perceived progress, business opportunity, or political advantage, we have seldom paused as a global society. It likely won’t happen now.

For one thing, such a pause would need to happen on a global scale to prevent certain nations from gaining advantage from a hiatus by others. Given the current geopolitical situation—given any geopolitical situation, really—it might not be realistic to expect this. For another thing, much of the research and development work happening in generative AI has already been made available through open-source models, making it even harder to contain. 

In a recent 60 Minutes interview, Google CEO Sundar Pichai said that society must adapt to make AI safe for the world. “It’s not for a company to decide,” he said. “This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers, and so on.” 

 

AI’s transformative potential 

The positive applications of generative AI hold historic promise to help people all over the world, from education and healthcare to climate change, food security, cybersecurity, and beyond. My question is not whether we should proceed but, like Google’s Pichai, how we should proceed to maximize the benefits and contain the risks of abuse or corrupt applications.

According to the World Justice Project, for instance, 83 percent of survey respondents in India who reported legal problems were unable to obtain relevant “information, advice, or representation.” This is compounded by a massive backlog in the legal system: last year the government reported about 40 million cases pending in Indian courts. AI has the potential to transform this reality by helping people understand their rights, find relevant information, and locate legal assistance. For lawyers and the judiciary, AI can simplify complex legal research, helping to dramatically reduce that backlog, with enormous social benefits as a result. Indeed, if the internet once held a redemptive promise of inclusivity and accessibility, it may well be that AI will help fulfill it.

AI is already providing tools to mitigate the risks of today, even as we adapt to tomorrow’s emerging assaults on identity and authenticity. Organizations are using AI to buttress security measures, with Microsoft having recently launched a product built on GPT-4. Security Copilot is a cybersecurity tool that gives security administrators a structured overview of possible threats to a company’s IT infrastructure, “at the speed and scale of AI.” Naturally, as with any AI-based tool, Security Copilot doesn’t always get everything right, and Microsoft takes pains to articulate what headlines often gloss over: “AI-generated content can contain mistakes.”

This is an important acknowledgement, and one we'll come back to. 

 

Assessing the threats 

Now let’s talk about the “profound risks to society.”  

Foremost among them is the supercharged potential for fraud. With AI-based tools, it’s trivial for bad actors to create a complete and fully synthetic online identity. Couple this with well-publicized deepfake technology, and the danger is clear and present: it’s no longer easy to discern when it’s a fraudster calling you on the phone, or even joining you on a video conference. This is particularly dangerous given the massive amount of personally identifiable information (PII) that has leaked, been harvested, or been stolen in recent years. If you are engaging in any sensitive conversations whatsoever, with anyone at all, this risk has to be top of mind.

On a larger scale, generative AI-based systems have been used to generate other malicious content, ranging from expertly crafted phishing attacks to full-on propaganda and “fake news” campaigns. 

There are also dangers associated with more prosaic uses of generative AI. Increasingly, AI is being used experimentally in fields such as law, science, and medicine, where the problem is that we don’t always know what we don’t know.

Neither does AI, it turns out. ChatGPT’s developers have warned that the tool “sometimes writes plausible-sounding but incorrect or nonsensical answers.” This is in fact AI’s defining weakness: It doesn’t “know” right from wrong. Indeed, hallucinating bots are a well-known phenomenon.  

It’s risks such as these that highlight the dangers we’re really facing. Consider the use case where high-value digital agreements are predicated on the output of such hallucinations—such as AI-driven insurance contract analysis, or the legal T&Cs that might be generated by AI for inclusion in a mortgage or commercial real estate purchase. What do you do when a bot changes its mind? 

In the context of uncertainty around what is real and true, digital interactions must be secured in such a way that every step is unassailable and incontrovertible—from initial identification of the parties to distributed record-keeping of the interaction and its artifacts. In short, the idea is to secure the full potential of human-to-human interactions. 
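To make that idea concrete, here is a minimal sketch, in Python, of what tamper-evident record-keeping for an interaction could look like. It is an illustration only, not a description of any particular product: each recorded step embeds a SHA-256 hash of the previous step, so altering any step after the fact breaks every hash that follows it.

```python
import hashlib
import json
import time

def record_step(log, actor, action, artifact):
    """Append a step to a hash-chained interaction log.

    Each entry embeds the hash of the previous entry, so tampering with
    any earlier step invalidates every hash that follows it.
    """
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "actor": actor,                # verified identity of the party
        "action": action,              # e.g. "uploaded", "signed", "approved"
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash is computed over the entry before the hash field is added.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash and confirm the chain is unbroken."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

# Usage: log two steps of an agreement and confirm integrity.
log = []
record_step(log, "alice@example.com", "uploaded", b"contract v1 text")
record_step(log, "bob@example.com", "signed", b"contract v1 text")
assert verify_chain(log)
```

A chain like this proves the recorded sequence has not been altered after the fact; anchoring the final hash with a trusted third party or a distributed ledger is what makes that proof independently verifiable.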

Given that this potential can be amplified by AI, it really does make sense to put guardrails around the development of generative AI-based tools, rather than halting development outright. These guardrails, driven by regulation and auditing, should ultimately ensure that any downsides are far outweighed by the benefits. 

 

Recommended next steps  

As part of a plan to counter AI-based fraud, enterprises must focus on ID verification and continuous universal authentication to ensure the integrity of their interactions, transactions, and agreements.  

Enterprises should also be assessing specific vulnerabilities associated with their digital agreements and looking at solutions to monitor and detect associated threats. For instance, if digital agreements are being executed in video environments, identity proofing may need to be layered into the video environment. A similar approach would be recommended for other environments, such as wire transfers or mobile applications. 
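As a rough illustration of what layering identity proofing into a high-value workflow can look like at the application level, the sketch below gates entry to a signing session on a recent identity-proofing event plus a second authentication factor. The policy thresholds, field names, and function are hypothetical assumptions made for illustration, not an actual OneSpan API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Participant:
    """State an application might track for a signer joining a session."""
    user_id: str
    id_proofed_at: Optional[datetime]  # last successful document/biometric check
    mfa_passed: bool                   # second factor verified for this session

# Hypothetical policy: identity proofing must be fresher than 30 days
# and a second factor must be confirmed in the current session.
ID_PROOF_MAX_AGE = timedelta(days=30)

def may_join_signing_session(p: Participant, now: Optional[datetime] = None) -> bool:
    """Return True only if the participant meets the step-up policy."""
    now = now or datetime.now(timezone.utc)
    if p.id_proofed_at is None or now - p.id_proofed_at > ID_PROOF_MAX_AGE:
        return False       # require a fresh identity-proofing step first
    return p.mfa_passed    # require step-up authentication in-session

# Usage
signer = Participant("u-123", datetime.now(timezone.utc) - timedelta(days=3), True)
print(may_join_signing_session(signer))  # True under the assumed policy
```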

OneSpan's DIGIPASS CX personal security devices

Furthermore, given the power of AI to “spoof” the authenticity of business agreements and transaction artifacts, potentially deepfaking everything from a business contract to a property deed, blockchain will have a crucial role to play in proving and defending identity and authenticity.
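In practice, that role usually amounts to anchoring a cryptographic fingerprint of the agreement so any later copy can be checked against it. The sketch below shows the core of that idea in Python; the in-memory dictionary stands in for an append-only ledger (a blockchain transaction, for example) and is an assumption made purely for illustration.

```python
import hashlib

def document_fingerprint(document_bytes: bytes) -> str:
    """Compute a SHA-256 digest that identifies this exact document."""
    return hashlib.sha256(document_bytes).hexdigest()

# At signing time, the fingerprint would be written to an append-only
# ledger; here a plain dict stands in for that ledger.
anchored = {}

def anchor(doc_id: str, document_bytes: bytes) -> None:
    """Record the document's fingerprint at the moment of execution."""
    anchored[doc_id] = document_fingerprint(document_bytes)

def is_authentic(doc_id: str, presented_bytes: bytes) -> bool:
    """True only if the presented copy matches the anchored fingerprint."""
    return anchored.get(doc_id) == document_fingerprint(presented_bytes)

# Usage: a later, subtly altered (or deepfaked) copy no longer matches.
anchor("deed-42", b"Property deed, 123 Main St, signed 2023-05-01")
print(is_authentic("deed-42", b"Property deed, 123 Main St, signed 2023-05-01"))  # True
print(is_authentic("deed-42", b"Property deed, 123 Main St, signed 2023-05-02"))  # False
```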

A final note on protecting the integrity of digital agreements when the AI involved may be liable to change its mind. The fundamental challenge is this: if it tells you one thing today and something different tomorrow, how do you know which version to trust?

The answer comes down to the ability to store, through secure vaulting measures, proof of what was originally communicated. Whether that original communication was truthful or not, the ability to access non-repudiable evidence of what was done by whom, when, and how is going to be the new oil in an AI-driven world.
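As one way of picturing what such non-repudiable evidence could look like, the sketch below seals a record of who did what, when, and to which content with an Ed25519 signature, using the open-source cryptography package. Key management and trusted timestamping are deliberately left out; a production vault would need both.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def seal_evidence(private_key, actor, action, content: bytes) -> dict:
    """Build an evidence record (who, what, when) and sign it.

    The signature binds the actor, the action, the content hash, and the
    capture time together; changing any field invalidates the signature.
    """
    record = {
        "actor": actor,
        "action": action,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = private_key.sign(payload).hex()
    return record

def verify_evidence(public_key, record: dict) -> bool:
    """Check that a vaulted record has not been altered since it was sealed."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage: the vault operator's key seals what the AI said, as it was said.
key = Ed25519PrivateKey.generate()
evidence = seal_evidence(key, "contract-bot-v2", "generated_terms", b"Clause 4.1: ...")
print(verify_evidence(key.public_key(), evidence))  # True while the record is intact
```

Evidence sealed this way can be produced later to show exactly what the system generated at the time, regardless of what a model might say if asked again.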

 


Ajay Keni is OneSpan’s Chief Technology Officer. Ajay brings more than 20 years of experience as a technologist and software engineer leading teams to build world-class products and services focused on delivering an exceptional customer experience and security in the cloud.