AI in Insurance Needs to be Safe and Fair

Authored by
Airlie Hilliard
Senior Researcher at Holistic AI
Published on
Oct 5, 2022

Ubiquitous adoption of AI in insurance

Artificial intelligence (AI) is increasingly being adopted across all sectors, with the global revenue of the AI market set to grow by 19.6% each year and reach $500 billion in 2023. One sector capitalising on the benefits that AI and the associated automation can bring is the insurance industry, with AI used in three key activities:

  • Purchasing: the time to purchase insurance is dramatically reduced and products are more customised
  • Underwriting: underwriting and pricing of contracts are performed rapidly using a data-driven approach
  • Claims: claims can be automatically processed and evidence can be rapidly reviewed to swiftly settle a claim

AI in Insurance can bring novel risks

While bias in insurance practices is not new – there is a long history of insurance providers redlining, or refusing to serve, geographical areas that align closely with racial make-up – AI can perpetuate these existing biases.

For example, in the context of health insurance, AI can be used to prioritise patient interventions based on predictions of disease onset and likelihood of hospitalisation. However, such analytics can discriminate against minority groups, who are often underrepresented in the data used to train the models. As a result, AI can prioritise less-sick white patients for intervention over sicker black patients, meaning patients may not receive the help they need if this bias is not recognised and addressed.

Likewise, AI used to determine car insurance premiums can discriminate against those from minority groups: people living in predominantly minority areas are given higher quotes than those with similar risk scores living in non-minority areas. In the US, individuals living in minority areas in California, Texas and Missouri can be charged around 10% more than those living in non-minority areas, even when race is not considered by the model, since other variables, such as zip code, can act as a proxy.
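
One way a risk management process can surface such proxy effects is to compare model outputs across areas after the fact, even though the model never sees race directly. The sketch below is purely illustrative: the data and column names are hypothetical, and a real audit would control for risk far more rigorously.

```python
import pandas as pd

# Hypothetical quotes data: the model never sees race, but zip-code-level
# demographics allow an after-the-fact test for proxy discrimination.
quotes = pd.DataFrame({
    "zip_minority_share": [0.82, 0.75, 0.12, 0.08, 0.91, 0.15],
    "risk_score":         [0.40, 0.42, 0.41, 0.39, 0.40, 0.43],
    "quoted_premium":     [1320, 1295, 1180, 1165, 1340, 1190],
})

# Flag quotes coming from predominantly minority areas
quotes["minority_area"] = quotes["zip_minority_share"] > 0.5

# Compare average premiums across the two groups at similar risk scores
group_means = quotes.groupby("minority_area")["quoted_premium"].mean()
disparity = group_means[True] / group_means[False] - 1
print(f"Premium gap for comparable risk profiles: {disparity:.1%}")
```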

Regulation will require AI Risk Management in insurance

Risk management frameworks can help to ensure that the data used to build insurance models is high-quality and unbiased, that the models do not produce biased outcomes, and that there is appropriate accountability for the decisions made by AI. While some companies will opt to do this voluntarily, AI risk management in the insurance sector will soon be required by law.
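
As a concrete illustration, one common outcome check such a framework might include is the disparate impact ratio, which compares favourable-outcome rates across groups. This is a hedged sketch with hypothetical data; the metrics and thresholds regulators actually require may differ.

```python
import pandas as pd

# Hypothetical automated claim decisions, labelled by demographic group.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1],
})

# Favourable-outcome rate per group, and the ratio of lowest to highest
rates = decisions.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()

# The four-fifths rule (borrowed from US employment law) flags ratios
# below 0.8 as potential adverse impact; insurance regulators may apply
# different thresholds.
print(rates)
print(f"Disparate impact ratio: {impact_ratio:.2f}")
```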

EU AI Act

AI in insurance has been identified as a high-risk practice under the forthcoming EU AI Act, meaning that insurance providers using AI are subject to risk management requirements concerning:

  • Using high-quality data to train models
  • Appropriate documentation practices
  • Transparency about interactions with AI systems
  • Adequate human oversight
  • Testing for accuracy and robustness
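
As an illustration of the last requirement, a simple robustness probe might perturb numeric inputs and measure how often the model's decision flips. This is only a sketch: `model` and `X` are placeholders for a fitted classifier and its feature matrix, not part of any prescribed conformity test.

```python
import numpy as np

def decision_flip_rate(model, X, noise_scale=0.01, n_trials=20, seed=0):
    """Share of predictions that change under small input perturbations."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)  # predictions on the unperturbed features
    flip_rates = []
    for _ in range(n_trials):
        X_noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
        flip_rates.append(np.mean(model.predict(X_noisy) != base))
    return float(np.mean(flip_rates))

# A flip rate near zero suggests stable decisions; a high rate is a
# signal to investigate before placing the system on the market.
```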

Under the proposed Act, high-risk systems – including those used in insurance – cannot be put on the EU market until they have undergone conformity assessments to check whether the system complies with the requirements of the legislation. Systems that pass the assessment must then bear the CE marking and be registered in an EU database before they can be placed on the market.

Following any major changes to the system, such as retraining the model on new data or removing features from it, the system must then undergo additional conformity assessments to ensure that the requirements are still being met before being re-certified and registered in the database.

Colorado State

In Colorado, legislation that comes into effect on 1st January 2023 will prohibit insurance providers from using data, algorithms or other predictive models that result in unfair discrimination based on race, colour, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression. Under the legislation, insurers are required to:

  • Outline the type of external consumer data and information sources used by their algorithms and predictive models
  • Provide an explanation of how the external consumer data and information sources, and algorithms and predictive models are used
  • Establish and maintain a risk management framework designed to determine whether the data or models unfairly discriminate
  • Provide an assessment of the results of the risk management framework and ongoing monitoring
  • Provide an attestation by one or more officers that the risk management framework has been implemented

Taking action early can help you take control over your AI

Taking steps to manage the risks of AI early allows enterprises to take control of their AI and embrace it with greater confidence.

To find out more about Holistic AI’s proprietary risk management platform and how we can help you manage the risks of your AI to get ahead of this upcoming regulation, schedule a demo or get in touch with us at we@holisticai.com.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
