AI liability insurance is no longer a theoretical edge case. In March 2026, the market started showing a clear split: some insurers are adding explicit coverage for AI malfunctions and hallucinations, while others are inserting hard exclusions that deny AI-related claims outright. That is how a separate AI insurance market starts — not with a neat product category, but with legacy policies breaking in opposite directions.
The reporting around this shift is unusually consistent. A March 16 story reported by AFP and carried by outlets including Malay Mail described insurers moving past the old “wait-and-see” posture. Founder Shield said some standard policies now contain “absolute AI exclusion” clauses, while it has also built professional-services coverage that explicitly includes “AI malfunction and hallucination” events. In parallel, Armilla and Munich Re described dedicated underwriting approaches for AI-specific risks, with Munich Re arguing that hallucination risk can never be fully engineered away.
That lines up with legal and industry analysis from outside the news cycle. The National Law Review reported in 2025 that affirmative AI insurance products had already started emerging, including Armilla’s Lloyd’s-backed policy for hallucinations, model drift, and mechanical failures. More recently, the same publication highlighted Berkley’s “absolute” AI exclusion for liability lines, calling its scope hard to overstate. Beinsure separately reported US carrier filings that add generative AI exclusions into approved forms across multiple states. The market is not debating whether AI risk belongs in underwriting anymore. It is debating where that risk should sit.
Silent AI coverage is ending
During the last phase of enterprise AI adoption, many buyers operated in a fog. AI incidents might be covered under cyber, E&O, D&O, or professional liability policies, but the language was often indirect. That ambiguity is what insurance analysts call silent coverage: the risk may be absorbed by the policy, but the policy was never written with AI front and center.
That middle ground is starting to disappear. Once insurers see a category of loss clearly enough, they do one of two things: price it explicitly or exclude it explicitly. AI has now reached that point.
This is a familiar insurance pattern. Cyber insurance followed a similar arc. At first, cyber losses were awkwardly absorbed into older policies. Then exclusions appeared. Then stand-alone cyber products matured into a real line of business. AI looks like it is taking the same road, just faster.
Why insurers are splitting now
The core problem is simple: AI systems can make costly mistakes without failing in a way traditional underwriting models recognize. A human employee can be trained, supervised, and assigned responsibility. A deterministic software system can often be tested against defined logic. A generative or agentic system can do something else entirely — produce convincing nonsense, execute the wrong task, or keep operating with statistical uncertainty baked in.
That makes AI hard to tuck neatly inside older policy language. If a model hallucinates a legal summary, recommends the wrong inventory order, fabricates customer service commitments, or misstates a compliance rule, where does that loss belong? Is it professional negligence, software failure, cyber risk, product liability, or something new?
Insurers taking the exclusion route are effectively saying: until we can model this properly, we do not want legacy policies accidentally picking it up. Insurers taking the coverage route are making the opposite bet: if AI risk is real and unavoidable, someone can underwrite it, test for it, and charge for it.
What the new AI insurance market looks like
The shape of the market is already visible.
One side is adding exclusions to existing policies. Berkley’s absolute AI exclusion is the bluntest example, but state filings described by Beinsure suggest broader movement. These endorsements give carriers a clean way to limit surprise exposure before claim patterns fully develop.
The other side is building affirmative AI products or endorsements. Founder Shield’s language around “AI malfunction and hallucination” coverage is notable because it treats AI failure as a named insurable event, not an accidental byproduct of some other coverage class. Armilla goes further by testing models and reviewing risk-management controls before binding coverage. Munich Re is underwriting both AI builders and AI users, which suggests the reinsurance layer sees this as a durable market, not a temporary novelty.
That matters because once reinsurers, specialist MGAs, and front-line brokers all start naming the same risk category, the market usually moves quickly. Buyers stop asking, “Is AI covered somewhere?” and start asking, “Which AI risks are covered, under what conditions, and what is excluded?”
The practical implication for companies using AI
If your business is deploying AI in customer-facing, operational, or decision-support workflows, this is a contract review issue now, not later. The dangerous assumption is that your existing stack of cyber, tech E&O, or general liability policies will quietly absorb AI losses. Some might. Some clearly will not. And the wording is moving fast enough that renewal season could change your position without much fanfare.
The harder truth is that better AI models do not eliminate the insurance question. Even as model quality improves, insurers are treating hallucination and malfunction as persistent commercial risks. That is the market telling you something useful: reliability improvements reduce frequency, but they do not erase severity.
The verdict is straightforward. AI insurance is breaking away from legacy coverage and becoming its own category. The split between affirmative coverage and absolute exclusion is not a temporary quirk — it is the earliest reliable sign that insurers now see AI as a standalone risk class worthy of its own pricing, underwriting, and carve-outs.
