Artificial intelligence (AI) is increasingly being adopted across sectors, with global AI market revenue projected to grow by 19.6% a year to reach $500 billion in 2023. One sector capitalising on the benefits that AI and the associated automation can bring is the insurance industry, where AI is used in three key activities:
While bias in insurance practices is not new – there is a long history of insurance providers redlining, or refusing to serve, geographical areas that align closely with racial make-up – AI can perpetuate these existing biases.
For example, in health insurance, AI is used to prioritise patient interventions based on predictions of disease onset and the likelihood of hospitalisation. However, such analytics can discriminate against minority groups, who are often underrepresented in the data used to train the models. As a result, AI can prioritise less-sick white patients for intervention over sicker black patients, meaning that patients may not receive the help they need if this bias is not recognised and addressed.
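One way this failure mode arises is when the model predicts a proxy label, such as healthcare spend, rather than health need itself. The sketch below is a minimal illustration with synthetic data – the groups, access levels, and numbers are all hypothetical, not drawn from any real study – showing how a cost-based ranking under-prioritises a group that has historically had less access to care:

```python
import random

random.seed(1)

# Hypothetical sketch: a model ranks patients by predicted healthcare COST,
# used as a proxy for health NEED. If one group historically had less access
# to care, it incurs lower cost at the same level of illness, so a cost-based
# ranking under-prioritises that group even though illness is identically
# distributed in both groups.
def make_patient(group, access):
    illness = random.gauss(5, 2)                      # true severity, same in both groups
    cost = illness * access + random.gauss(0, 0.5)    # observed spend depends on access
    return {"group": group, "illness": illness, "cost": cost}

patients = [make_patient("A", access=1.0) for _ in range(500)] + \
           [make_patient("B", access=0.7) for _ in range(500)]  # group B: less access

# Prioritise the top 20% of patients by the cost proxy.
top = sorted(patients, key=lambda p: p["cost"], reverse=True)[:200]
share_b = sum(p["group"] == "B" for p in top) / len(top)
print(f"Group B share of interventions: {share_b:.0%}")
```

Although both groups are equally sick on average, group B ends up with far less than half of the interventions, because the proxy label encodes the historical access gap.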
Likewise, AI used to determine car insurance premiums can discriminate against minority groups: people living in predominantly minority areas are quoted higher premiums than those with similar risk scores living in non-minority areas. In the US, individuals living in minority areas in California, Texas and Missouri can be charged around 10% more than those in non-minority areas, even when race is not an input to the model, since other variables, such as zip code, can act as a proxy for it.
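The proxy effect can be sketched with synthetic data. In this hypothetical example – the zip codes, loss factors, and ~10% figure are illustrative assumptions, not real pricing data – race never enters the pricing function, yet a surcharge keyed on zip-level historical losses reproduces a racial premium gap:

```python
import random
import statistics

random.seed(0)

# Two hypothetical zip codes, one predominantly minority. Race is NOT a model
# input, but zip code correlates with it perfectly in this toy example.
def make_driver(zip_code, minority):
    risk = random.gauss(0.5, 0.1)  # true accident risk, same distribution in both areas
    return {"zip": zip_code, "minority": minority, "risk": risk}

drivers = [make_driver("90001", True) for _ in range(500)] + \
          [make_driver("90210", False) for _ in range(500)]

# A naive pricing rule: base premium scaled by the driver's risk and a surcharge
# keyed on historical losses per zip. Assume losses in the minority zip were
# inflated by past redlining and underservice.
historical_loss_factor = {"90001": 1.10, "90210": 1.00}

def quote(driver):
    return 1000 * driver["risk"] * historical_loss_factor[driver["zip"]]

minority_quotes = [quote(d) for d in drivers if d["minority"]]
other_quotes = [quote(d) for d in drivers if not d["minority"]]

gap = statistics.mean(minority_quotes) / statistics.mean(other_quotes) - 1
print(f"Average premium gap: {gap:.1%}")  # roughly 10%, despite identical risk distributions
```

Dropping race as a feature does not remove the disparity here; the zip code carries the same information, which is why bias audits need to look at outcomes across groups, not just at the model's input list.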
Risk management frameworks can help to ensure that the data used to train insurance models is high-quality and unbiased, that the models do not produce biased outcomes, and that there is appropriate accountability for decisions made by AI. While some companies will adopt such frameworks voluntarily, AI risk management in the insurance sector will soon be required by law.
Under the proposed EU AI Act, high-risk systems – including those used in insurance – cannot be placed on the EU market until they have undergone conformity assessments to check that the system complies with the requirements of the legislation. Systems that pass the assessment must then bear the CE marking and be registered in an EU database before they can be placed on the market.
Following any major change to the system – for example, if the model is retrained on new data or features are removed from it – the system must then undergo additional conformity assessments to ensure that the requirements are still met, before being re-certified and re-registered in the database.
In Colorado, legislation that comes into effect on 1st January 2023 will prohibit insurance providers from using data, algorithms or other predictive models that result in unfair discrimination based on race, colour, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression. Under the legislation, insurers are required to:
Taking steps to manage the risks of AI early allows enterprises to take command and control over AI and embrace it with greater confidence.
To find out more about Holistic AI’s proprietary risk management platform and how we can help you manage the risks of your AI to get ahead of this upcoming regulation, schedule a demo or get in touch with us at email@example.com.
Written by Airlie Hilliard, Senior Researcher at Holistic AI. Follow her on Linkedin.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.