How AI Is Changing Insurance in 2026

Nine out of 10 U.S. insurers now use artificial intelligence in some part of their business. In the same year that full company-wide AI deployment jumped from 8% to 34%, consumer support for AI in property and casualty insurance fell nine points, from 29% to 20%, according to Insurity's 2025 survey. That tension between rapid corporate adoption and falling public trust defines the insurance market in 2026.

The AI-in-insurance market is worth an estimated $10 to $20 billion today, growing at a 32% to 37% compound annual rate. McKinsey projects AI could unlock $1.1 trillion in annual value for insurers. For policyholders, the effects are already concrete: claims that once took 10 days now close in 36 hours at AI-enabled carriers. Telematics programs have distributed more than $1.2 billion in premium discounts. Root Insurance, powered by AI pricing models, posted its first-ever annual profit in 2024 after years of losses.

Those same technologies power algorithms that denied hundreds of thousands of health claims in seconds, charged drivers in majority-Black communities 71% more for auto coverage and prompted a wave of lawsuits that federal courts allowed to proceed in 2025. Nearly one in three health insurers don't test their AI models for racial bias, according to the National Association of Insurance Commissioners. What AI means for your premiums, your claims and your rights depends on who's building the model, who's auditing it and where you live, and MoneyGeek's data on home and auto insurance burden by state shows that burden falls unevenly.

How Insurance Companies Use AI to Set Your Rates

AI now touches nearly every step of how insurers price your policy, from the initial quote to annual renewal. McKinsey projects that by 2030, more than 90% of pricing and underwriting for individual and small-business policies will be fully automated. That shift is already well underway. Ping An, the world's largest insurer by customers, delivers instant decisions on 93% of its life insurance applications. LLM adoption among U.S. insurers jumped from 18% to 63% in a single year, according to a 2025 Conning survey, yet only 7% have scaled AI across their entire enterprise.

For carriers that lean into AI-driven pricing, the financial results have been striking. Root Insurance built its entire underwriting model around AI and telematics data. After losing $147 million in 2023, the company posted a $30.9 million profit in 2024, its first profitable year, while premiums grew from roughly $600 million in 2022 to more than $1.3 billion in 2024. Across the insurance sector, McKinsey found that AI leaders generated 6.1 times the total shareholder return of AI laggards over five years, a wider performance gap than in most other industries.

Speed and profitability don't automatically mean fair pricing for consumers, though. Only 47% of insurers have deployed predictive modeling for risk evaluation, and the models that exist often rely on proxy variables (ZIP codes, credit scores, education levels) that correlate with race and income. A J.D. Power survey found that only 15% of consumers believe insurers should fully use AI to price policies. Another 33% said AI pricing should stay limited until bias and ethical concerns are addressed.

WHY FASTER UNDERWRITING DOESN'T ALWAYS MEAN FAIRER PRICING

The same data that speeds up your quote can embed discrimination. The Consumer Federation of America found that drivers in predominantly Black communities pay auto premiums averaging 71% higher than those in white communities, a pattern MoneyGeek's own study of communities of color and auto premiums confirmed across multiple states. In New York, drivers in predominantly non-white ZIP codes pay $1,728 more per year. AI models don't use race directly, but they ingest proxy variables (geography, credit history, occupation) that correlate with it. The result is a pricing system that can replicate historical discrimination without ever naming a protected class.

Colorado took direct aim at this problem. Under SB 21-169, the most aggressive state AI-in-insurance law in the country, insurers must inventory every algorithm used in pricing, test for discriminatory outcomes and submit annual compliance reports. The law expanded to cover auto and health insurance in October 2025. Most states still lack comparable requirements, and industry readiness is uneven: 92% of insurance workers say they want AI training, but only 4% of carriers invest in reskilling at scale.

Can AI Speed Up Your Insurance Claim, or Deny It Faster?

AI-powered claims processing is the most visible change for policyholders in 2026. Among the 82% of insurance companies using AI in claims, average processing time dropped from 10 days to 36 hours. BCG estimates that fully automated simple claims could resolve in real time for up to 70% of cases, cutting costs 30% to 50%. For consumers filing straightforward auto or property claims, the speed improvement is immediate and measurable.

Lemonade processes 55% of all claims through AI. The insurer's pet insurance line reduced its cost per claim by 68%, from $44 to $14. Aviva cut complex liability assessment time by 23 days and saved more than £60 million in 2024 from motor claims AI alone. Allstate uses GPT-based AI to draft roughly 50,000 daily claims communications, and Ping An handles 80% of its customer service (about 1.5 billion interactions per year) through AI systems.

Those gains in speed raise a parallel concern. J.D. Power found 47% of consumers are uncomfortable with AI processing their claims. MoneyGeek's analysis of ACA claim denial rates by insurer shows how widely those rates vary. Three federal lawsuits proceeding through U.S. courts show why that discomfort may be justified.

WHEN ALGORITHMS OVERRIDE DOCTORS' RECOMMENDATIONS

In Kisting-Leung v. Cigna, plaintiffs allege the insurer's PxDx algorithm auto-denied more than 300,000 claims in two months at an average of 1.2 seconds per denial. One Cigna medical director allegedly denied 60,000 claims in a single month. A federal judge in California issued a mixed ruling in March 2025, allowing key breach-of-fiduciary-duty claims to proceed while dismissing standing challenges for some plaintiffs.

The UnitedHealthcare class action centers on the nH Predict algorithm, which plaintiffs say has a 90% error rate based on how often denials are reversed on appeal. The estate of a deceased patient alleges UnitedHealthcare pressured employees to keep rehabilitation stays within 1% of the algorithm's predictions. A Minnesota federal court allowed breach-of-contract claims to move forward in February 2025, opening the door to discovery of internal documents about the algorithm's design.

California responded with SB 1120, effective January 2025, which prohibits health insurers from denying coverage based solely on an AI algorithm. Medical necessity decisions must involve a licensed physician. Attorney General Rob Bonta issued legal advisories in January 2025 warning health plans against AI-based denials that override doctor judgment. At least four other states (Arizona, Maryland, Nebraska, Texas) passed AI healthcare legislation in 2025. For a comparison of plans by denial rate and cost, see MoneyGeek's guide to the best health insurance plans.

Does AI in Insurance Discriminate by ZIP Code?

The evidence says yes, in many cases. As covered in the underwriting section above, drivers in predominantly Black communities pay auto premiums averaging 71% more than those in white communities, and drivers in non-white New York ZIP codes pay $1,728 more per year. The pattern isn't limited to the U.S. A 2025 European study of 12.4 million insurance observations found that AI models overcharged the poorest communities by 5.8% for life insurance and 7.2% for health coverage above fair-benchmark pricing.

The accountability gap is wide. The NAIC's 2025 health insurance survey found that nearly one-third of health insurers don't regularly test their AI models for bias. The American Academy of Actuaries published "Unmasking Hidden Bias" in 2025, detailing how behavioral data and proxy variables institutionalize discrimination in underwriting. Research from the University of New South Wales found that with AI and big data, some proxy variables are no longer easy for even actuaries to identify.

All three major AI discrimination lawsuits in insurance survived motions to dismiss in 2025. Huskey v. State Farm, filed in 2022, alleges the insurer's machine-learning fraud algorithms use racial proxies; the case is now in discovery. A new front opened in November 2025 when Los Angeles County launched a civil investigation into State Farm's use of AI tools to review wildfire claims, with California's insurance commissioner opening a parallel market conduct examination.

The industry isn't ignoring the problem entirely. Forty-two percent of carriers plan to hire a Chief AI Officer within 12 months, and new roles like Algorithm Oversight Specialists and AI Ethics Officers are appearing across the sector. The gap between stated intention and practice remains large, though. Most carriers haven't scaled AI company-wide, and Colorado remains the only state requiring insurers to inventory and test every pricing algorithm for discriminatory outcomes.

How Much Can Telematics Save You on Car Insurance?

Telematics programs promise lower rates in exchange for sharing your driving data, and more than 21 million U.S. policyholders have signed up. The average cost of car insurance runs roughly $1,700 per year nationally. Progressive's average renewal savings of $322 per year after Snapshot completion represent about 19% of that figure for drivers who qualify. Consumer Reports found a median savings of $120 per year across telematics programs. Policygenius reported an average of $332 per year for drivers who did save.

The full picture is less generous. The Consumer Federation of America found that less than 31% of enrolled drivers actually saw their premiums decrease. Twenty-four percent saw increases. Forty-five percent saw no change at all. Only about 15% of eligible policyholders enroll in the first place. The gap between marketed savings and actual outcomes compounds the broader trust deficit: a Swiss Re survey found that while 80% of customers trust their specific insurer to handle data responsibly, only 39% of consumers trust insurers as a category with personal data, ranking them below banks and health companies.

Telematics outcomes for enrolled drivers, per the Consumer Federation of America:

Premium decreased: less than 31% of drivers (savings of $120 to $332 per year)
No change: 45% of drivers ($0)
Premium increased: 24% of drivers (higher rates despite participation)

Data sharing extends well beyond driving. Forty-four percent of Americans own health-tracking wearable devices, and 54.5% say they'd share wearable data for tailored insurance policies. Embedded insurance, which bundles coverage into purchases like car loans and electronics, is projected to surpass $180 billion in gross written premium by 2026. Root Insurance already gets 44% of new policy sales through embedded partnerships with Carvana, Hyundai Capital and Experian. These distribution models depend on AI to price policies at the point of sale, extending the same fairness questions beyond traditional insurance channels.

What States Are Doing to Regulate AI in Insurance

A patchwork of state laws, federal directives and international rules now governs how insurers can use AI, with no single standard in place. Colorado's SB 21-169 remains the most aggressive state law. It requires insurers to inventory every algorithm and external data source used in pricing, test for discriminatory outcomes and submit annual compliance reports with attestation from a chief risk officer. The law expanded from life insurance to auto and health insurance in October 2025, and the state's broader AI Act begins enforcement in June 2026.

California's SB 1120, effective January 2025, takes a narrower approach focused on health insurance. The law prohibits coverage denial based solely on an AI algorithm and mandates physician review of medical necessity decisions. The California Department of Insurance issued formal guidance in May 2025, and Attorney General Bonta's legal advisories put carriers on notice. No formal enforcement actions have been reported yet, but the regulatory infrastructure is active.

Colorado (SB 21-169): Algorithm inventory, bias testing and annual compliance reports for life, auto and health pricing AI. Expanded to auto and health October 2025.

California (SB 1120): Prohibits health coverage denial based solely on AI; requires physician review of medical necessity decisions. Effective January 2025.

New York (DFS Circular Letter 2024-7): Bias testing and explainability requirements for insurance AI models. Issued 2024.

24 states + D.C. (NAIC Model Bulletin): Principle-based AI governance framework; formal Model Law in development. Adopted December 2023 onward.

EU, for insurers operating in Europe (EU AI Act): Classifies insurance risk AI as high-risk; penalties up to €35 million or 7% of worldwide revenue. High-risk compliance deadline: August 2, 2026.

At the federal level, the NAIC's model bulletin on AI, adopted in December 2023, offers a principle-based framework that 24 states plus Washington, D.C. had adopted by late 2025. The NAIC is moving toward a formal Model Law and piloting AI Systems Evaluation Tools in 10 to 12 states in early 2026. The organization also launched a request for information on third-party AI vendor oversight, with a model law anticipated later this year.

THE FEDERAL VS. STATE SHOWDOWN OVER AI RULES

President Trump signed Executive Order 14365 in December 2025, pushing for federal preemption of state AI laws. The order directs the DOJ to create an AI Litigation Task Force to challenge state regulations in court, explicitly naming Colorado's AI Act as problematic. It also instructs the FTC chairman to articulate how state AI laws may be preempted by federal statute.

The NAIC responded within days, expressing "deep concern" and calling on the administration to "affirm state regulation of AI in the business of insurance." The core tension is straightforward: states have always regulated insurance in the U.S., and the White House can't erase those laws by executive order alone. Multiple law firms (Latham & Watkins, Gibson Dunn, Paul Hastings) note that overturning state law requires either congressional action or court rulings. The McCarran-Ferguson Act, which specifically reserves insurance regulation to states, adds a further barrier: only Congress can change it. State governors in California, Colorado and New York have said they will continue enforcing their AI laws regardless.

Internationally, the EU AI Act classifies insurance risk assessment AI as high-risk, with compliance requirements taking effect in August 2026. Penalties reach €35 million or 7% of worldwide revenue. The European Insurance and Occupational Pensions Authority issued draft guidance on AI governance in 2025, and its survey found that 50% of non-life carriers already deploy AI models. Insured catastrophe losses hit $107 billion globally in 2025, with 25% of P&C executives now using AI for storm risk assessment and 18% for wildfire risk, making the regulatory stakes around these models increasingly material.

Are AI-First Insurers Like Root and Lemonade Delivering for Consumers?

The financial trajectory of AI-native insurers offers a partial answer. Root Insurance posted $30.9 million in net income in 2024 and remained profitable through the first nine months of 2025, with year-to-date net income of roughly $35 million. Lemonade, while still unprofitable, narrowed its net loss from $202 million in 2024 to $165 million in 2025, grew premiums 31% and doubled its market cap to roughly $5.5 billion. The insurer's Q4 2025 was its strongest quarter, with a 52% gross loss ratio and near-breakeven adjusted EBITDA.

Oscar Health shows the limits of the AI-equals-profit narrative. After posting its first-ever annual profit in 2024 ($25.4 million on $9.2 billion in revenue), the health insurer swung to a $443 million loss in 2025, driven by a surge in sicker-than-expected ACA enrollees across the industry, not an AI failure. The reversal shows that AI tools can cut administrative costs (Oscar's administrative cost ratio improved 160 basis points year over year) without protecting carriers from broader market forces they don't control. MoneyGeek's look at the profit paradox facing health insurers puts Oscar's swing in wider industry context.

The workforce picture is mixed. Half of the current insurance workforce is projected to retire within 15 years, creating more than 400,000 open positions. GEICO reduced staffing roughly 20% between 2022 and 2023 (from 38,285 to 30,584 employees), and Allianz Partners signaled plans to cut 1,500 to 1,800 call center positions. The Bureau of Labor Statistics projects overall employment of claims adjusters, appraisers, examiners and investigators will decline 5% from 2024 to 2034. Still, 93% of insurance CEOs plan workforce expansion in the next three years, and 42% plan to hire a Chief AI Officer within 12 months. NIB Health Funds saved $22 million and cut customer service costs 60% through AI, but its approach paired AI with humans rather than replacing them, which is the model most policyholders say they want.

How to Protect Yourself From Unfair AI Insurance Decisions

Consumers aren't powerless as AI reshapes insurance. The regulatory and legal shifts of 2025 created new tools and rights, and basic comparison shopping is more valuable than ever as pricing algorithms diverge across carriers.

1. Compare quotes from at least three to five carriers. AI-driven pricing means your rate can vary sharply depending on which variables a carrier's model weighs. Two insurers pricing the same driver can land on very different premiums. MoneyGeek's rankings of the best car insurance companies and cheapest car insurance companies are a good starting point for comparison.

2. Ask your insurer if AI is used in pricing or claims decisions. Several states (Colorado, New York) now require disclosure of algorithmic decision-making tools.

3. Know your appeal rights. Under California SB 1120 and similar state laws, health insurers cannot deny coverage based solely on an AI algorithm. If your claim is denied, request a physician review.

4. Check your state's regulatory stance. Twenty-four states have adopted the NAIC model bulletin on AI. Colorado and New York have the strongest consumer protections. Your state insurance commissioner's website lists complaint procedures.

5. Be selective about data sharing. Telematics programs benefit some drivers and penalize others. If you're offered a telematics discount, ask what happens if the program determines you're a higher risk. Progressive's Snapshot program raises rates for roughly 20% of participants.

Filing complaints with your state insurance commissioner creates a record. The NAIC is building AI-specific examination tools for state regulators, and early consumer complaints help inform where enforcement resources go. In an industry where 86% of consumers prefer a human agent for complex decisions, asking to speak with a person for claims disputes and coverage questions remains one of the simplest protections available.

Frequently Asked Questions About AI in Insurance

Seven questions come up most often when consumers ask about AI and insurance. The answers below draw on the same data, lawsuits and state laws covered in this article, compressed into direct responses you can act on.

About Nathan Paulus



Nathan Paulus is the Head of Content at MoneyGeek, where he conducts original data analysis and oversees editorial strategy for insurance and personal finance coverage. He has published hundreds of data-driven studies analyzing insurance markets, consumer costs and coverage trends over the past decade. His research combines statistical analysis with accessible financial guidance for millions of readers annually.

Paulus earned his B.A. in English from the University of St. Thomas, Houston.


Copyright © 2026 MoneyGeek.com. All Rights Reserved