Viewpoint: Deepfake Fraud Is On the Rise. Here’s How Insurers Can Respond

By Nicos Vekiarides | July 17, 2024

It is interesting, and telling, that the concerns around generative artificial intelligence have to this point largely revolved around copyright infringement. The most high-profile lawsuits around GenAI have so far centered on the idea that this technology will absorb the work of artists and writers without compensation and churn out passable replicas for pennies on the dollar.

But this wouldn't be a concern if there were no consensus that the technology genuinely is powerful: that it really can manufacture persuasively human-seeming text and images. And while the copyright implications matter, there are far more sinister implications to reckon with, particularly for the insurance industry.

Put simply: insurance professionals cannot do their jobs if they cannot distinguish fact from fiction, and the rise of generative AI tools has blurred exactly those lines. The term "deepfake" entered popular consciousness long before the average person had heard of OpenAI, but only in recent years, with the rise of consumer GenAI technology, have deepfakes begun to pose a real threat.

Today, anyone can easily manufacture fraudulent imagery through text-to-photo or text-to-video generative AI platforms. Most people won't, but if there is a way to commit fraud, you can be sure some percentage of people will take advantage of it.

The implications here are profound and far-reaching. For insurance professionals, deepfakes have the potential to wreak havoc on daily operations and lead to billions in lost revenue. Fighting back requires understanding the nature of the threat, and the proactive steps insurers can take to prevent it.

Why deepfakes are so dangerous for the insurance industry

Annual losses to insurance fraud are estimated at a sum amounting to a quarter of the entire industry's value. Clearly, insurers struggled to prevent fraud even before the rise of hyper-realistic, easily generated synthetic media. And with the continued adoption of back-end automation, things are poised to get a lot worse.

The emerging paradigm for the insurance industry right now is self-service on the front end and AI-facilitated automation on the back end. Accordingly, many claims are projected to be touchless by 2025. This paradigm has definite advantages for insurers, who can outsource repetitive work to machines while focusing human ingenuity on more complex tasks. But the sad reality is that automation can very easily be turned against itself. What we are verging on is a situation in which images manipulated by AI tools are waved through the system by other AI tools, leading to incalculable losses along the way.

While I wrote about this very topic in 2022, prior to the widespread accessibility of generative AI frameworks, it is no longer hypothetical: already, fraudsters are fabricating images of "total loss" vehicles and reaping the insurance benefits. And GenAI has also opened the door to fabricated paperwork: in a matter of seconds, bad actors can now draw up forged invoices or underwriting appraisals complete with real-seeming signatures and letterhead.

It's true that some degree of fraud is likely inevitable in any industry, but we are not talking about misbehavior on the margins. What we are confronted with is a total epistemological collapse, a helplessness on the part of insurers to assess the truth of any given situation. It's an untenable situation, but there is a solution.

Turning AI against itself: how AI can help detect fraud

As it happens, this very same technology can be deployed to combat fraudsters, and to restore a much-needed sense of certainty to the industry at large.

As we all now know, AI is nothing more or less than its underlying models. Accordingly, the very same mechanisms that allow AI to create fraudulent imagery allow it to detect fraudulent imagery. With the right AI models, insurers can automatically assess whether a given photograph or video is suspicious. Crucially, these processes can run automatically, in the background, meaning insurers can continue to reap the benefits of advanced automation without opening the door to fraud.

As with other AI innovations, this kind of fraud detection involves close collaboration between systems and employees. If and when a claim is flagged as fraudulent, human employees can evaluate the problem directly, aided in their decision-making by the information the AI provides. In effect, the AI lays out its case for why it thinks the image or document in question is fraudulent: for instance, by drawing attention to identical images found elsewhere on the internet, or to subtle but distinctive irregularities characteristic of synthetically generated images. In this way, a reasonable determination can be reached quickly and efficiently.
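The flag-and-review workflow described above can be sketched in a few lines. This is a simplified illustration, not any vendor's actual system: the claim IDs and image bytes are hypothetical, and exact SHA-256 matching stands in for the perceptual hashing and ML detectors a production pipeline would use. What it shows is the routing logic: automated checks attach human-readable "reasons" to a claim, and any claim with reasons is escalated to a human adjuster.

```python
import hashlib
from dataclasses import dataclass, field

# Hypothetical index of hashes for images already seen online or in prior
# claims. A real system would use perceptual hashes and trained detectors;
# exact SHA-256 matching is a simplified stand-in for duplicate detection.
KNOWN_IMAGE_HASHES = {
    hashlib.sha256(b"stock-photo-of-crashed-sedan").hexdigest(),
}

@dataclass
class ClaimReview:
    claim_id: str
    flagged: bool
    reasons: list = field(default_factory=list)

def triage_claim_image(claim_id: str, image_bytes: bytes) -> ClaimReview:
    """Run automated checks; flag the claim for human review if any fire."""
    reasons = []
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_IMAGE_HASHES:
        reasons.append("image is identical to a previously seen image")
    # Further detectors (GenAI-artifact classifiers, metadata checks)
    # would append their own reasons here.
    return ClaimReview(claim_id, flagged=bool(reasons), reasons=reasons)

# A duplicated image is flagged and routed to a human adjuster, with the
# AI's "case" attached as the reasons list; a novel image passes through.
dup = triage_claim_image("C-1001", b"stock-photo-of-crashed-sedan")
new = triage_claim_image("C-1002", b"fresh-photo-from-claimant")
```

The key design point is that the automated layer never renders a final verdict; it only accumulates evidence and escalates, keeping the human decision-maker in the loop.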

Given the damage deepfakes have already caused, it is bracing to remember that this technology is in its relative infancy. And there is little doubt that, in the months and years to come, bad actors will attempt to wring every advantage they can out of each new development in GenAI's evolution. Preventing them from doing so requires fighting fire with fire: only cutting-edge tools can hope to combat cutting-edge fraud.
