Cyber fraud based on artificial intelligence-generated audio or visual reproductions, dubbed “deepfakes,” poses an emerging threat to organizations but has yet to lead to significant increases in commercial insurance claims or losses, experts say.
The expertise and computing resources needed to deploy deepfakes make the technology a less compelling choice for criminals, they say.
London-based Arup Group, a multinational design and engineering company, said recently that it was the target of a deepfake scam that led to one of its Hong Kong employees paying out $25 million to fraudsters, and confirmed that fake voices and images were used, according to news reports.
“This is clearly a new potential threat vector, but I think there are some significant limitations,” such as not being enormously scalable, said Mike Rastigue, vice president for cyber risk management with Aspen Insurance Holdings Ltd. Such limitations have restricted the scope of claims and losses.
Despite these limitations, Mr. Rastigue recommends businesses and other organizations update their security awareness training to incorporate deepfakes (see related story below).
“It’s certainly something that we are seeing more now than we (did) years ago, but I wouldn’t say that it’s so much more,” said Gwenn E. Cujdik, Exton, Pennsylvania-based manager-North America cyber incident response and cyber services for Axa XL, a unit of Axa SA. “It’s definitely on people’s minds because AI is on everybody’s minds.”
The expertise and technology required for deepfakes are a barrier to entry for widespread misuse, Ms. Cujdik said.
“Deepfakes are really something that require a level of sophistication, not just in the technology, but also the user. The technology right now is just not as user-friendly as people think it is,” she said.
Even as AI technology advances, the equipment required to deploy deepfakes will continue to pose an obstacle to some criminals, said Tiago Henriques, Zurich-based vice president of research for Coalition Inc.
“There’s still going to be a barrier on the computation piece because it still requires big (graphics processing units) to generate consistent video,” Mr. Henriques said. Criminals look at their “return on investment,” which does not yet justify using deepfake video technology in most cases, he said.
“Threat actors, unfortunately, are being successful enough with traditional methods that they don’t need to invest time and effort and energy into creating deepfakes,” said Raf Sanchez, London-based chief international officer for Beazley Security, part of Beazley PLC.
“We have not seen the proliferation of deepfakes the way we’ve seen ransomware in organizations around the globe. We’re not there,” said John Farley, New York-based managing director of Arthur J. Gallagher & Co.’s cyber practice.
Experts say video fakes often contain “tells,” features that betray their synthetic origins, such as displaying a hand with an inaccurate number of fingers or an arm missing a hand.
The technology does have the potential for misuse, Mr. Farley said.
“It’s something that’s technically here, and it will most likely become easier to access as technology evolves,” he said. “As technology evolves, these types of scams will most likely become easier to carry out.”
More people will be able to use such programs, and the attacks will probably require less computing power to carry out as well, he said.
Where criminals using AI are making headway is with “hyper-tuned” email phishing attacks and sometimes audio deepfakes, which are easier to perpetrate than video scams, Mr. Henriques said. “We see a higher quality of phishing, and we’re starting to see more scam calls using audio deepfakes as well,” he said.
With increased incidence and the pace of the technology’s evolution, deepfakes “should absolutely be part of the threat model” organizations use to evaluate risks and train employees, Mr. Henriques said.
“The big thing that they’re using large language models for in compromises is phishing. You can create a more realistic phishing email very quickly with AI,” said Mea Clift, St. Paul, Minnesota-based principal cyber risk engineer at Liberty Mutual Insurance Co.
“There is the capability for these deepfakes, especially video deepfakes and vocal deepfakes. We have not seen an extensive amount of it on the landscape yet,” Ms. Clift said. Given the threat capability, organizations should update cyber defense training, perhaps by simply identifying the most likely targets within a business for such an attack and making that group aware of the new technologies and potential exposures, she said.
“We’ve yet to see any material uptick in claims where deepfakes leveraging generative AI technologies are concerned,” said Jaymin Kim, Toronto-based senior vice president, cyber risk practice, for Marsh LLC.
While the evolution of AI may further empower criminals, the technology is also being used to bolster cyber defenses and may generate countermeasures against the attacks, she said.
“It’s important to note that the same technology can and is being leveraged by the good actors as well,” to improve monitoring and detection systems to prevent fraud, for example, Ms. Kim said.
“AI technology presents an opportunity for the good actors to more efficiently detect and respond to vulnerabilities in ways that weren’t possible before,” Ms. Kim said.
Training key to spotting AI-created scams
Cyber defenses must keep pace with evolving threats such as deepfakes that use artificial intelligence, experts say.
Organizations should continually review and update employee cyber training, much as they have done in response to ransomware threats, they say.
“Employee training has to evolve to help our employees understand what a deepfake is and how to recognize one,” said John Farley, New York-based managing director of Arthur J. Gallagher & Co.’s cyber practice.
“It’s important to do it now rather than wait for the claims to occur, because that’s typically the way our industry has operated. They pay a lot of losses, and then they realize that certain goals need to be implemented to either prevent or mitigate those losses. I would rather take the position that this is emerging now,” he said.
Organizations should be proactive in improving their controls “instead of waiting to see what might happen,” said Jaymin Kim, Toronto-based senior vice president, cyber risk practice, for Marsh LLC.
“Collectively, everyone should be including this in their security awareness training: that deepfakes are a real risk out there now, and we need to start training our employees to spot a deepfake the same way that we have trained them to spot ransomware,” said Mike Rastigue, vice president for cyber risk management with Aspen Insurance Holdings Ltd.