Artificial intelligence (AI) is rapidly reshaping risk landscapes across virtually every sector, compelling insurers to reconsider how emerging AI threats are underwritten. For many years, certain AI-related exposures were quietly absorbed under broader cyber, professional liability, or general insurance policies—a practice commonly referred to as “silent AI coverage.” Yet as AI technologies become more sophisticated and pervasive, this approach is increasingly seen as inadequate, if not perilous.
A recent study by WTW, titled Insurance in the AI Age, authored by Dr Anat Leor and Sonal Madhok, highlights the growing urgency of this issue. The authors compare silent AI coverage to the early days of cyber insurance, when emerging digital risks were often included within conventional policies until specialist products were developed. In a similar vein, AI-related losses that fall outside traditional policy definitions can expose both insurers and policyholders to unexpected gaps, generating uncertainty and potential financial vulnerability.
To address these challenges, insurers are increasingly moving towards explicitly defined AI coverage. This includes the introduction of AI-specific endorsements, precise exclusion clauses, and the creation of standalone AI insurance products tailored in particular to small and medium-sized enterprises. Large technology corporations, by contrast, often prefer self-insurance solutions, given the scale and complexity of their AI operations.
Despite this shift, many AI risks continue to intersect with conventional insurance lines. Standard cyber policies, for example, often exclude damage originating from an organisation’s own data, while general liability policies may omit purely financial losses. Consequently, policy renewals now require rigorous reassessment, particularly in relation to autonomous decision-making, algorithmic errors, and other AI-specific hazards.
Underwriting practices are also evolving. Insurers increasingly demand detailed disclosures regarding AI governance, human oversight, and internal controls. Priority is given to “human-in-the-loop” systems, ensuring that critical decisions remain subject to human review. Regulatory developments, including the European Union’s AI Act, are further shaping accountability frameworks and future coverage standards.
Dr Leor emphasises that clear policy language, robust governance structures, and enriched underwriting data are essential to mitigating uncertainty. Such measures are expected to bolster the resilience of the insurance sector, supporting organisations in embracing AI responsibly while managing the associated risks effectively.
The move away from silent coverage underscores a broader recognition within the industry: as AI continues to revolutionise business and society, traditional assumptions about risk must evolve to ensure protection keeps pace with technological innovation.
