
How is AI in Insurance Being Regulated?

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Aug 10, 2023

Insurance is a legacy sector that has used rule-based and algorithmic decision-making for decades, from premium setting and underwriting to fraudulent claim detection. Artificial intelligence is further transforming these practices, with $8 billion invested in insurtech start-ups between 2018 and 2019 alone and an estimated 85% of insurance executives planning to experiment with automation over the next three to five years. In particular, automation is being used to enhance:

  • Purchasing and distribution by allowing personalised quotes to be issued instantly.
  • Product offerings by facilitating more granular policies so that they can be tailored to the policyholder’s needs (e.g., mobile phone battery insurance instead of general phone insurance).
  • Underwriting using a data-driven approach to draft policies rapidly.
  • Claims processes by swiftly evaluating evidence provided by claimants, reducing the costs associated with the claims process by up to 30%.

Insurance algorithms can result in biased outcomes

However, AI and automation in insurance have come under fire multiple times for producing biased outcomes. A ProPublica study, for example, uncovered patterns of racial discrimination in the algorithms used to set car insurance premiums, with drivers in predominantly minority neighbourhoods in California, Illinois, Missouri, and Texas being charged higher premiums than those living in non-minority neighbourhoods with the same risk level.
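
Disparities of this kind are typically quantified by comparing outcomes across groups at the same risk level. The sketch below is a minimal, hypothetical illustration in Python – the data and column names are assumptions for demonstration, not the ProPublica dataset or methodology – showing how average quoted premiums could be compared across neighbourhood groups within each risk band.

```python
# Minimal sketch with hypothetical data: compare average quoted premiums across
# neighbourhood groups that fall in the same risk band.
import pandas as pd

quotes = pd.DataFrame({
    "risk_band":      ["low", "low", "low", "low", "high", "high", "high", "high"],
    "neighbourhood":  ["minority", "white", "minority", "white",
                       "minority", "white", "minority", "white"],
    "annual_premium": [1210, 1050, 1185, 1030, 1980, 1760, 2015, 1740],
})

# Average premium per neighbourhood group within each risk band
by_group = (
    quotes.groupby(["risk_band", "neighbourhood"])["annual_premium"]
    .mean()
    .unstack()
)

# A ratio above 1 means minority neighbourhoods pay more at the same stated risk level
by_group["disparity_ratio"] = by_group["minority"] / by_group["white"]
print(by_group)
```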

Insurance provider State Farm has been accused of discriminatory treatment in a class action lawsuit filed in December 2022 by policyholder Jacqueline Huskey, who claims that the algorithms and tools State Farm uses to automate claims processing result in racial discrimination. The lawsuit cites a study from the Center on Race, Inequality, and the Law at the NYU School of Law that surveyed 800 black and white homeowners. The study found that black policyholders experienced more delays and more correspondence with State Farm agents and that, overall, their claims were met with more suspicion than those of their white counterparts, which the lawsuit alleges violates the Fair Housing Act (42 U.S. Code § 3604(a)-(b)). It is also claimed that State Farm’s algorithms and tools introduce bias into how data is analysed, with its use of natural language processing allegedly producing negative bias in voice analytics for black speakers relative to white speakers and negative associations with typically “African American” names compared with white-sounding names.
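
Name-based associations of this kind are sometimes probed with a simple substitution test: score identical claim text twice, swapping only the claimant's name, and compare the outputs. The sketch below is a hypothetical illustration – the score_claim function is a placeholder for whatever scoring model is under test, not State Farm's system, and the example names are assumptions.

```python
# Hypothetical name-substitution (counterfactual) test: score the same claim text
# with only the claimant's name changed, then compare the mean scores per name group.
from statistics import mean

def score_claim(text: str) -> float:
    # Placeholder for the model under test; a real check would call that system here.
    return 0.5

TEMPLATE = "Claim filed by {name}: water damage to kitchen ceiling, repair estimate attached."
group_a = ["DeShawn", "Latoya", "Jamal"]   # names typically perceived as African American
group_b = ["Connor", "Emily", "Brad"]      # names typically perceived as white

scores_a = [score_claim(TEMPLATE.format(name=n)) for n in group_a]
scores_b = [score_claim(TEMPLATE.format(name=n)) for n in group_b]

# A persistent non-zero gap on otherwise identical text suggests the name alone
# is shifting the model's output.
print(f"mean gap (A - B): {mean(scores_a) - mean(scores_b):+.3f}")
```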

Policy makers are moving to regulate insurtech

It is important to note that while humans can be biased with premium setting – and there is indeed a long history of insurance providers redlining certain neighbourhoods – the use of algorithms has the potential to perpetuate and amplify these inequalities, resulting in discrimination at a greater rate than with manual premium setting. As such, policy makers are introducing regulation specifically targeting the algorithms used in insurance.

Colorado SB21-169

Signed into law on 6 July 2021 and effective 7 September 2021, Colorado’s Senate Bill 21-169 prohibits insurers from unfairly discriminating on the basis of protected characteristics such as race, colour, national or ethnic origin, religion, sex or gender, and disability, as well as from using customer data or algorithms that lead to discriminatory outcomes.

Due to be enforced from 1 January 2023 at the earliest, the bill requires Colorado’s Insurance Commissioner to develop specific rules for different types of insurance policies and insurance activities for identifying and preventing unfair discrimination. While the rules have yet to be finalised or enforced, Colorado’s Division of Insurance has held multiple stakeholder meetings as it develops rules for the underwriting of life insurance and automotive policies.

EU AI Act

With the aim of creating a global gold standard for AI, the European Commission’s EU AI Act, which was recently passed by majority vote in the European Parliament, sets out a risk-based approach to AI regulation, wherein systems that pose the greatest risk have the most stringent obligations. Under the most recent version of the text, AI systems used to influence eligibility decisions related to health and life insurance are considered high-risk, meaning that their legal use in the EU will be conditional on meeting the requirements for high-risk systems, including the establishment of a risk-management system, transparency obligations, data quality considerations, and appropriate levels of accuracy, robustness, and cybersecurity.

DC Stop Discrimination by Algorithms Act

Similarly, back across the pond in the US, DC policymakers are seeking to regulate a range of AI systems used in deciding eligibility for important life opportunities, including insurance. Indeed, the DC Stop Discrimination by Algorithms Act – which was first introduced in December 2021 and reintroduced in February 2023 – would prohibit discrimination using AI-driven insurance practices based on race, colour, religion, national origin, sex, gender expression or identity, sexual orientation, familial status, genetic information, source of income, or disability.

Action from the National Association of Insurance Commissioners

This prohibition of algorithmic discrimination is a recurring theme in the proposed laws, building on existing anti-discrimination laws. However, it is not just proposed laws that are stressing the need to ensure that insurance algorithms are bias-free; the National Association of Insurance Commissioners (NAIC) – a body representing insurance regulators in the US – has also been ramping up its initiatives to prevent unfair insurance practices.

In 2019, the NAIC formed the Big Data and Artificial Intelligence Working Group to investigate how AI is being used in the insurance sector and its effects on consumer protection and privacy. As a result of its inquiry, the working group published its Principles on Artificial Intelligence, setting out non-binding expectations for the use of AI in insurance throughout its lifecycle. In particular, the Principles emphasise the importance of accountability, compliance, transparency, and safe, secure, and reliable system outputs; they were adopted by the full NAIC membership in 2020.

More recently, on 17 July 2023, the NAIC published a model bulletin reminding insurance providers that AI-driven solutions must be used in compliance with existing insurance laws and regulations, including those governing unfair trade practices and unfair claims settlement practices.

The bulletin also defines AI as:

a term used to describe machine-based systems designed to simulate human intelligence to perform tasks, such as analysis and decision-making, given a set of human-defined objectives.

It asserts that insurance providers should maintain a written program for AI systems to ensure that their use in decision-making does not violate trade laws or other legal standards, with the program addressing governance, risk management controls, and internal audit functions in a way that is proportionate to the insurer’s use of and reliance on AI systems.

Get compliant

Insurance tech is increasingly being targeted by policymakers and regulators, with fair decision-making and risk management requirements a recurring theme. Managing the risks of AI is an important task, but one that requires expert oversight – it doesn’t happen overnight. Schedule a call to find out how Holistic AI can help you manage the risks of AI in insurance and maintain a competitive edge.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
