AI Liability Is Forcing Insurance to Confront a New Kind of Risk

Artificial intelligence is no longer a speculative technology sitting on the edge of enterprise operations. It is embedded in customer interactions, internal workflows, fraud detection, content generation, and decision-making systems across nearly every industry. As adoption accelerates, a new tension has emerged — not between innovation and regulation, but between AI-driven risk and the insurance mechanisms traditionally used to absorb it.

Insurers are being asked to underwrite something that does not behave like prior categories of risk. AI does not fail in a single, discrete moment. It misleads, misclassifies, impersonates, or quietly propagates errors at scale. The result is a liability landscape that is fragmented, ambiguous, and still very much under construction.

Recent real-world incidents illustrate why the insurance industry is paying attention. Air Canada was forced to honor a fare its chatbot wrongly promised. A deepfake video scam cost engineering firm Arup millions. Google now faces litigation after its AI Overviews feature incorrectly named a contractor as a defendant in a lawsuit, allegedly costing that company a customer. These are not edge cases. They are signals.

A Market Struggling to Categorize AI Harm

One of the central challenges insurers face is classification. Traditional policies rely on well-defined buckets: cyber, professional liability, general liability, errors and omissions. AI incidents, however, routinely span several at once.

As Thomas Bentz, a partner at Holland & Knight, put it:

“I think that there’s a lot of confusion and growth in this industry, because where these claims fit is still kind of being figured out.”

That uncertainty becomes more acute when AI outputs cause physical, financial, or reputational harm.

“So, for example, if an AI program causes bodily injury, does that fall under your (commercial general liability), your general liability type of coverage, or does that fall on your cyber coverage?”

The problem is not merely academic. Policy language was not written with probabilistic systems or generative outputs in mind.

“You have got some of these gaps that really don’t fit nicely into either program.”

From an underwriting perspective, those gaps translate into pricing difficulty, coverage hesitancy, and, in some cases, attempts to exclude AI-related claims altogether.

“And so I think the industry right now is trying to figure out, how do we deal with that? How do we price for it? How do we make sure that it fits nicely in that box where it’s supposed to go?”

This uncertainty is compounded by the relative youth of cyber insurance itself. Bentz noted that cyber coverage has only existed in a meaningful form for about two decades, and as a true enterprise risk product for roughly half that time. AI is arriving before insurers have even fully stabilized their cyber loss models.

Why Insurers Lack a Historical Playbook

Insurance is built on precedent. Loss history informs underwriting. Claims data informs exclusions, endorsements, and pricing. AI undermines that model by evolving faster than loss patterns can mature.

As Bentz explained, insurers need time and repetition to understand what policies were truly meant to cover, what should be added, and what should be carved out. AI’s rapid diffusion means that history simply has not caught up.

That does not mean insurers are ignoring AI risk. It means they are feeling their way forward.

What Insurers Are Quietly Looking For

Panos Leledakis, founder and CEO of the IFA Academy and a member of the National Association of Insurance and Financial Advisors, described an industry in exploration mode rather than execution mode.

“That said, I cannot confirm that AI governance is yet a standardized or decisive underwriting criterion across the industry. It appears to be directional rather than formalized at this stage.”

In practice, insurers are beginning to ask more detailed questions — even if they are not yet codified in application forms. According to Leledakis, these include:

  • Whether a company has baseline AI governance or usage policies
  • How data access and handling are controlled when AI systems are involved
  • Whether employees are trained on AI misuse, hallucinations, and social engineering risks

AI incidents, he noted, are increasingly part of internal risk discussions.

“While most insurers are not publicly labeling these as distinct ‘AI incidents’ yet, many are quietly treating them as extensions of cyber, fraud, professional liability, and errors and omissions risk.”

This framing matters. It signals that AI is not being treated as a wholly separate risk class, but as a force multiplier across existing ones.

From Deepfakes to Chatbots: Where Losses Are Emerging

The most visible AI risks today fall into a few recurring patterns:

  • Chatbots providing incorrect or misleading information
  • Deepfake-enabled social engineering and fraud
  • Unintended collection or interception of communications
  • Reputational harm caused by false AI-generated statements

Coalition, a digital risk insurance provider, identified chatbots as an emerging risk area after analyzing nearly 200 cyber insurance claims and scanning thousands of websites.

“Chatbots were cited in 5% of all web privacy claims. These claims alleged that chatbot providers intercepted communications between the customer and the website owner without consent.”

These claims often rely on decades-old wiretapping statutes that were never designed with AI interfaces in mind.

“Plaintiffs’ attorneys have found repeatable strategies that make it easy to launch allegations of wrongful data collection, relying on the fact that everyday tools like tracking pixels, analytics scripts, and chatbots are deeply embedded across millions of websites.”

The Limits of Traditional Security Controls

Daniel Woods, a principal security researcher at Coalition, highlighted a crucial mismatch between traditional security models and AI-enabled threats.

Conventional cyber defenses focus on networks, endpoints, and access controls. Deepfake attacks, by contrast, exploit visibility rather than vulnerability.

“The way threat actors launch these attacks is they need something like 10 seconds of footage of a corporate leader speaking, or a video of them.”

That exposure is often unavoidable.

“You know, most businesses can’t avoid that. You need your corporate leaders to go out and speak for marketing purposes.”

Recognizing this reality, Coalition announced it would begin offering deepfake-related coverage, including forensic analysis, takedown assistance, and crisis communications.

As Woods explained, the trajectory is familiar:

“First these deep fakes were launched against politicians, then celebrities. And what we see is these trends tend to filter down from like high profile to lower, until it becomes like a mass market issue.”

What Organizations Can Do Now to Satisfy Cyber Insurers

While insurers continue to experiment, organizations are not powerless. In fact, proactive governance may influence underwriting outcomes sooner than formal standards do.

Six practical steps to reduce AI liability exposure

  1. Inventory where AI is actually used, including embedded vendor features.
  2. Distinguish low-risk internal assistance from high-impact external outputs.
  3. Implement human-in-the-loop review for customer-facing or consequential uses.
  4. Limit sensitive data exposure in prompts, logs, and training pipelines.
  5. Train employees on deepfake fraud, AI hallucinations, and misuse.
  6. Prepare incident response plans that account for non-breach AI failures.

These measures do not eliminate risk, but they demonstrate intent, maturity, and accountability — signals insurers increasingly value.

A Transitional Moment for the Insurance Industry

Leledakis described the current moment as transitional, with insurers testing boundaries and policyholders learning that AI changes more than just productivity.

Over the next one to two years, clearer policy language, endorsements, and underwriting expectations are likely to emerge. Until then, AI liability will remain a moving target — shaped as much by governance choices as by technology itself.

The organizations that recognize this early, and act accordingly, will be better positioned not only to manage AI risk, but to remain insurable as the landscape continues to evolve.
