The National Institute of Standards and Technology (NIST), one of the leading voices in the development of artificial intelligence (AI) standards, launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0) on 26 January 2023. Developed over 18 months through a consensus-driven and open process, the AI RMF was shaped by more than 400 formal comments from 240 organizations. This official first version refines previous drafts and is accompanied by an AI RMF Playbook, which provides guidance to organizations on implementing the framework’s recommendations and will be updated periodically.
We sat down with NIST to discuss the AI RMF and learn about their vision for how it can be implemented. In this blog post, we discuss the key takeaways from the framework and the insights NIST shared with us.
According to NIST, ‘AI risks’ are defined as “the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event.” These risks contribute to potential harms to people and organizations resulting from the development and deployment of AI systems, and can stem from the data used to train and test the AI system, the system itself (i.e., the algorithmic model), and the way the system is used and interacts with people.
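NIST’s definition treats risk as a composite of likelihood and consequence severity but deliberately does not prescribe a formula. As a purely illustrative sketch (the `risk_score` function, its scales, and the multiplicative form are our assumptions, not part of the AI RMF), a common simplification is risk = likelihood × impact:

```python
# Illustrative only: the AI RMF does not prescribe a risk formula.
# This sketch assumes likelihood on a 0-1 scale and impact severity on a 1-5 scale.

def risk_score(likelihood: float, impact: float) -> float:
    """Composite risk measure: probability of the event times magnitude of harm."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be between 0 and 1")
    return likelihood * impact

# A likely, moderately harmful event can outscore a rare, severe one:
print(risk_score(0.5, 4))   # 2.0
print(risk_score(0.25, 5))  # 1.25
```

The point of the sketch is the trade-off it makes visible: a single composite number is convenient for prioritization, but it collapses exactly the contextual and socio-technical nuance the AI RMF asks organizations to keep in view.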
Recent examples of harm have resulted in legal action, including an ongoing lawsuit against the insurance company State Farm over allegations that its automated claims processing produced algorithmic bias against Black homeowners. In another recent case, Louisiana authorities came under fire when the use of facial recognition technology led to a mistaken arrest and an innocent man was jailed for a week. The risks posed by AI should not be considered without nuance, which highlights the importance of operationalizing the RMF as a way to dive effectively into the more complex areas of risk.
In light of the several cases of AI harm seen in recent years, the purpose of the AI RMF is to help organizations ‘prevent, detect, mitigate, and manage AI risks.’ It is designed to be non-prescriptive and industry- and use-case-agnostic, recognizing that context is vital. It can also be used to help an organization determine its risk tolerance.
The end goal of the AI RMF is to promote the adoption of trustworthy AI, defined by NIST as high-performing AI systems that are safe, valid, reliable, fair, privacy-enhancing, transparent & accountable, secure & resilient, and explainable & interpretable.
Although the recommendations of the AI RMF are voluntary, they align with the White House’s Blueprint for an AI Bill of Rights, and many are based on organizational structure and procedures, meaning they do not impose a significant financial burden on those that adopt them.
NIST’s work will influence future US legislation and global AI standards, as well as the activities of enterprises across the US. It will also work to promote a sense of public trust in evolving technologies such as AI.
The AI RMF is based around four core functions, with governance at the heart of the framework:

- Govern: cultivate a culture of risk management across the organization; this function cuts across the other three.
- Map: establish the context in which an AI system operates and identify the risks related to that context.
- Measure: assess, analyze, and track the identified risks.
- Manage: prioritize the risks and allocate resources to respond to them.
NIST recommends that the AI RMF be applied at the beginning of the AI lifecycle and that diverse groups of internal and external stakeholders involved in (or affected by) the process of designing, developing, and deploying AI systems should be involved in the ongoing risk management efforts. It is expected that effective risk management will encourage people to understand the downstream risks and potential unintended consequences of these systems, especially how they may impact people, groups, and communities.
The question of measurement is vital to the operationalization of the AI RMF and AI governance more broadly. At NIST’s events preceding the launch of the AI RMF, several key questions and dilemmas regarding measurement were identified.
Underpinning the AI RMF is a focus on moving beyond computational metrics towards the socio-technical context of the development, deployment, and impact of AI systems. The AI RMF was designed to help improve the public trustworthiness of AI. As such, it seeks to address negative impacts of AI, such as the perpetuation of societal biases, discrimination, and inequality, working towards a framework that can help AI preserve civil rights and liberties. By promoting a rights-affirming approach, NIST anticipates that the likelihood and degree of harm will decrease.
Promoting a rights-affirming approach means moving beyond looking only at data representation when mitigating bias and discrimination. Instead, it is critical to understand both the social and technical aspects of your AI system: everything from the data a system is trained on to how it is trained and continues to learn is significant, especially from an interdisciplinary perspective. When considering trustworthy or responsible AI, the onus lies on more than just developers and engineers. Bias and discrimination can be predicted and mitigated by pushing teams across disciplines to collaborate, such as social scientists with data scientists, and by investing in external assurance.
Holistic AI has pioneered the field of trustworthy AI and empowers enterprises to adopt and scale AI confidently. In line with the importance of a socio-technical approach highlighted by NIST, our team has the interdisciplinary expertise needed to identify and mitigate AI risks, with our approach informed by AI-relevant policy.
Get in touch with a team member to find out how Holistic AI can help you take steps towards external assurance and risk management.
Written by Ashyana-Jasmine Kachra, Policy Associate at Holistic AI & Airlie Hilliard, Senior Researcher at Holistic AI.