The recent CII Journal article titled Evaluation of AI Liability Cover raises a critical issue: as artificial intelligence becomes embedded in business operations, the question of who holds liability when things go wrong is no longer theoretical. The concern is well placed, but the piece lacks the depth and market grounding that industry professionals need for practical insight.
What the article gets right
The piece is correct in pointing out that AI, particularly black-box models, introduces complex risks. When autonomous systems make decisions, and those decisions result in harm or loss, traditional liability models struggle to keep up. Developers, users, and data providers may all be partially responsible, but current insurance frameworks don’t clearly define where that responsibility lies.
The article also identifies the gap in underwriting data. Without sufficient claims history, and with no global standards on AI safety or behaviour, pricing the risk is a challenge. These issues are widely acknowledged in the industry, and the article helps put a spotlight on them.
What’s missing
There’s no mention of how the insurance market is currently handling AI risk. For example, some insurers are not offering standalone AI liability policies but are instead modifying existing products like cyber insurance or technology errors and omissions. That shift is already happening, and leaving it out weakens the practical value of the piece.
It also fails to mention the EU AI Act, which is set to introduce new obligations on developers and users of high-risk AI systems. These regulations are shaping underwriting discussions in Europe right now, while the UK government is taking a more flexible, principles-based approach. Ignoring these developments leaves a critical gap in the analysis.
Oversimplified view on product design
The suggestion that insurers need to “start developing” AI liability products makes it sound easier than it is. In reality, the industry is facing multiple structural challenges:
- Attribution of fault in AI systems is difficult, especially when outputs are probabilistic and constantly changing
- Legal definitions of harm from AI outcomes remain vague in most jurisdictions
- There is little consensus on what constitutes reasonable oversight for autonomous tools
These are major reasons why most insurers are still in the exploratory phase rather than actively launching new policies.
The capital conversation is missing
Another crucial factor is reinsurance. AI liability presents long-tail, low-frequency, high-severity risks with unknown accumulation potential, and reinsurers are understandably cautious. Without their backing, insurers will hesitate to commit significant capacity or launch innovative products. This perspective is completely absent from the article, despite being central to how new covers are actually brought to market.
Final thought
The CII Journal article does well to raise awareness, but for professionals actively engaging with the future of AI and insurance, it falls short. The conversation needs to go further. We need examples from the market, commentary from underwriters, insights from reinsurers, and updates on regulatory alignment.
The issue of AI liability isn’t just about coverage. It’s about trust, accountability, and readiness in a world where machines make decisions that humans once did.