Takeaways:
- Insurers and insureds face ambiguity about whether certain risks are covered
- Commercial insurers have to decide what risks they're willing to cover
- Media liability, errors and omissions (E&O) and employment practices liability are among the affected coverages
Silent AI can be a killer for commercial insurance.
Insurance law experts speaking at Zywave's Cyber Risk Insights conference in New York on October 29 raised a host of new questions about how insurers are covering, or should cover, risks caused by AI.

AI presents a variety of risks for commercial entities, including AI bias leading to discrimination in employment or housing, copyright infringement when AI is used to produce media materials, and mistakes made when AI is used for the discovery process in litigation, according to Christopher Suesing, partner at Wood Smith Henning & Berman LLP.
Commercial insurance coverages including media liability, errors and omissions (E&O) and employment practices liability may not explicitly address AI risks, according to Kevin Casey, lead, cyber wordings and product innovation at QBE, a global insurer and reinsurer.

"When these policies were designed, maybe AI wasn't considered in the pricing. There's no affirmative numbers saying this is covered, and there's no exclusionary language saying it's not covered," he said. "It's just this ambiguity from the insurer's perspective – did we contemplate this risk when we priced it? Then, of course, the ambiguity for the ultimate consumer and the customer saying, 'I don't know if these risks are covered.'"
That means commercial insurers have to decide what AI risks to affirmatively cover in cyber policies or other specialized policies. "How do we model that exposure, and whether or not some of these exposures should be within the cyber policy?" asked Michelle Worrall, global director of insurance product at Resilience, a cyber risk consultancy.

In media liability insurance, for example, Worrall said, the use of AI to develop content can bring risks such as defamation, invasion of privacy and misappropriation of someone's likeness. This amounts to a "silent" exposure in this form of insurance, she explained.
Cyber insurers probably do not want to be exposed to the consequences of how AI may work with sets of data, according to Worrall. Media liability insurers are likely to start defining what their E&O provisions actually include regarding AI, such as emotional distress damages, she explained. Coverage for deceptive trade practices stemming from AI misrepresentations of products and services presents a "moral hazard," raising the question of whether that exposure should be covered in insurance for AI activities, Worrall added.
"Insurance companies are taking a really close look at that insuring agreement, figuring out, we probably want to cover the large language model itself and copyright infringement arising from the training data sets," she said. "But the output that's then disseminated to the public? Maybe."