AI Risk Management is becoming a top global priority. There have been many high-profile cases of harm caused by AI and algorithms, from discrimination in credit scoring and insurance to unreliable trading algorithms. Governments and regulators are now cracking down.
Several countries have proposed legislation and frameworks to ensure that AI is developed, used, and governed in a responsible and ethical way, to prevent further harms.
The U.S. has been leading the charge. At the federal level, the Algorithmic Accountability Act has been proposed, and the National Institute of Standards and Technology (NIST) has produced an AI Risk Management Framework. States and other jurisdictions have passed or proposed laws regulating AI use in specific areas. Illinois has enacted the Artificial Intelligence Video Interview Act, which requires employers to notify job applicants that their video interviews are being screened by algorithms.
The New York City Council passed legislation mandating bias audits of automated tools used to make decisions about hiring candidates and promoting employees. Legislation enacted in Colorado prevents insurance providers from using biased algorithms or data to make decisions.
Finally, Washington, D.C. has proposed the Stop Discrimination by Algorithms Act, which would prevent discrimination in automated decisions about employment, housing, and public accommodations, and require audits for discriminatory patterns.
Demonstrating its commitment to managing the risks of AI, the White House Office of Science and Technology Policy has recently published the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.
This is a White House white paper, designed to protect US citizens from the harms that AI can cause.
The Blueprint signals President Biden’s vision of how AI should be governed. It will be used to inform future U.S. policy and legislation.
The Blueprint sets out five principles that should guide the design, use, and deployment of AI and automated systems:
1. Safe and Effective Systems: you should be protected from unsafe or ineffective systems.
2. Algorithmic Discrimination Protections: you should not face discrimination from algorithms, and systems should be used and designed in an equitable way.
3. Data Privacy: you should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
4. Notice and Explanation: you should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
5. Human Alternatives, Consideration, and Fallback: you should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Based on insights from researchers, technologists, advocates, and policymakers, the White House also published ‘From Principles to Practice’, a technical companion to the Blueprint, to support organisations in implementing the framework.
AI Risk Management is climbing the global agenda and is a priority issue for the U.S. Government. It is vital for protecting against the harms posed by automated systems while maximising their value.
Being proactive and establishing AI Risk Management processes early is the best way to achieve command and control over your automated systems. Request a demo to find out more about how Holistic AI can support you on this journey!
Written by Airlie Hilliard, Senior Researcher at Holistic AI. Follow her on LinkedIn.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.