Policy Hour
panel & networking
Online Webinar

NIST AI RMF and the Biden-Harris Executive Order on AI, Explained

Monday, 26 February 2024 – 1 pm ET / 10 am PT

In our latest Policy Hour, we provided a high-level overview of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) and the Biden-Harris Executive Order on AI.

Holistic AI’s Co-Founder & Co-CEO Emre Kazim and NIST’s Martin Stanley gave a run-down of the AI RMF and Executive Order (EO) 14110, discussed NIST’s mandate under the EO, and offered practical insights on how enterprises can easily adopt the AI RMF.

During the discussion, we received more audience questions than we were able to answer live, so we have put together a Q&A.

Below we have included the full recording of the event, as well as the slides and Q&A.

Holistic AI collaboration with NIST

Holistic AI is also proud to announce that we have been selected as an inaugural member of NIST’s U.S. AI Safety Institute Consortium (AISIC). The creation of the AISIC supports the U.S. Artificial Intelligence Safety Institute (USAISI), established under NIST to advance safe and trustworthy AI. The AISIC aims to develop science-based, empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety around the world.

We are excited about the opportunity to collaborate with the AISIC, engage in working groups, and share our knowledge in creating and implementing trustworthy AI systems.

Q&A


What were the key intentions behind the AI RMF?

The two key intentions behind the AI RMF were to:

  1. Identify the standards needs in this area and how the federal government could engage with them constructively
  2. Articulate the concept of trustworthiness, which is laid out explicitly in the framework

There was an overarching need to look at how to manage the risks of AI technology as they manifest for people, organisations, ecosystems, and the planet, and how this differs significantly from conventional technology risk management, where the goal is to look only at the systems side of things.

Could the AI RMF be used to demonstrate compliance with other AI regulations, such as the EU AI Act?

This could be a possibility. Given the current state of global AI regulations and the crosswalks between them, the AI RMF can be leveraged to meet various objectives.

What are the characteristics of trustworthy AI?

There are seven characteristics of trustworthy AI:

  1. Safe
  2. Secure and resilient
  3. Explainable and interpretable
  4. Privacy-enhanced
  5. Fair with harmful bias managed
  6. Valid and reliable
  7. Accountable and transparent

The intent behind formulating these characteristics is to break down the high-level concerns and opportunities surrounding how people and technologies interact within a sociotechnical system. These characteristics also have to be considered in a context-specific manner, as the concerns and opportunities attached to each trustworthiness characteristic vary with an AI system’s context of use.

What is the key aim of the AI RMF?

The key aim of the AI RMF is to enable organisations to develop a risk-aware culture. Firms need employees directly involved in risk management who can articulate what it means to have people interacting with this technology within a particular mission space, rather than conducting AI risk management in a silo from the rest of the organisation. The framework is, by design, use-case agnostic and can be applied across sectors and domains. Within a good risk-aware culture, adoption shouldn’t be merely a tick-box exercise.

How can organisations operationalize the AI RMF?

The AI RMF Playbook, a companion resource to the AI RMF, provides specific actions under the framework’s four core functions (Govern, Map, Measure, and Manage) that can be taken to operationalize the framework and meet its listed outcomes. At its core, a strong governance structure underpins the culture of risk management within organisations. Once that is established, AI RMF users can map, measure, and manage risks across the AI system lifecycle, which should become an iterative and cross-functional process, as the sketch below illustrates.
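
To make the govern-then-map-measure-manage loop concrete, here is a minimal sketch of how an organisation might structure an internal AI risk register around the four RMF functions. This is our own illustration, not part of the framework or the Playbook: the class names, field names, and example entry are all invented for the example.

```python
from dataclasses import dataclass, field
from enum import Enum


class RmfFunction(Enum):
    """The four core functions of the NIST AI RMF."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class RiskEntry:
    """One entry in a hypothetical AI risk register (field names are ours)."""
    system: str             # the AI system under review
    function: RmfFunction   # the RMF function this activity falls under
    description: str        # the risk or action being tracked
    owner: str              # the accountable role or team
    status: str = "open"


@dataclass
class RiskRegister:
    """A minimal register that groups entries by RMF function for review."""
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def by_function(self, fn: RmfFunction) -> list[RiskEntry]:
        return [e for e in self.entries if e.function == fn]


if __name__ == "__main__":
    register = RiskRegister()
    register.add(RiskEntry(
        system="example-screening-model",
        function=RmfFunction.MAP,
        description="Identify affected groups and contexts of use",
        owner="AI governance team",
    ))
    # Reviewing the register function by function supports the iterative,
    # cross-functional process the Playbook describes.
    for fn in RmfFunction:
        print(fn.value, [e.description for e in register.by_function(fn)])
```

Each review cycle would revisit the register, since mapping, measuring, and managing are iterative rather than one-off activities.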

How does AI risk management differ from traditional cybersecurity and privacy risk management?

The key difference is that traditional cybersecurity and even privacy risk frameworks look at how to manage risks to systems and data, while frameworks for AI need to look at how to manage risks in the context of harms and benefits to people. Many traditional frameworks also focus solely on harms, while in the AI ecosystem we often take on risks because there is a benefit on the other side.

How does the AI RMF relate to other AI governance frameworks?

Like the OECD Recommendation on AI and the Blueprint for an AI Bill of Rights, the NIST AI RMF is not compulsory. While the EU AI Act is compulsory, it shares some similarities with the NIST AI RMF in its focus on risk. NIST has provided an illustration of how the AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the proposed EU AI Act, the Biden-Harris EO, and the Blueprint for an AI Bill of Rights.

What is the relationship between the NIST AI RMF and ISO/IEC 42001?

There is no formal relationship between NIST and the International Organization for Standardization (ISO). However, both take a similar approach to how organisational processes can be developed to ensure the development of trustworthy AI systems. The ISO/IEC 42001 Information Technology – Artificial Intelligence – Management System standard specifies requirements for establishing, implementing, and maintaining an AI Management System (AIMS) within organizations. Both the AI RMF and ISO/IEC 42001 offer a flexible structure that is designed to be applicable across different sectors and types of AI applications.

However, the NIST AI RMF is more focused on the risk management of AI, while ISO/IEC 42001 focuses on providing structure for an AI management system. The latter is a process standard, whereas the NIST AI RMF provides comprehensive guidance on the processes, mapping, and measurement mechanisms associated with an AI system. Lastly, ISO standards are global, while the NIST AI RMF is US-specific, though with the potential for global impact.

To help harmonize AI risk management practices across jurisdictions, NIST has provided a crosswalk so that organizations adopting the NIST AI RMF can also seamlessly conform with ISO/IEC 42001.

Will organizations have to provide documentation to NIST?

Since NIST is not a regulator, organizations will not have to provide documentation to NIST. However, organizations that implement the AI RMF are advised to document their AI risk management activities. For instance, NIST recommends documenting evidence that the organisation has complied with applicable laws and regulations, and how system performance metrics inform risk tolerance decisions. More guidance on this can be found in the AI RMF Playbook.
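
As a purely illustrative sketch of what such documentation might look like, the record below captures compliance evidence alongside the performance metrics that inform a risk tolerance decision. The schema, field names, and values are hypothetical; neither NIST nor the Playbook prescribes a particular format.

```python
import json
from datetime import date

# Hypothetical schema: the AI RMF Playbook recommends documenting compliance
# evidence and the link between performance metrics and risk tolerance, but
# it does not prescribe any particular fields or format.
documentation_record = {
    "system": "example-screening-model",          # hypothetical system name
    "recorded_on": date.today().isoformat(),
    "applicable_regulations": ["<relevant law or regulation>"],  # placeholder
    "compliance_evidence": "Internal audit report filed with legal team",
    "performance_metrics": {
        "accuracy": 0.93,                          # example values only
        "disparate_impact_ratio": 0.87,
    },
    # How the metrics above informed the organisation's risk tolerance call:
    "risk_tolerance_decision": "Within tolerance; re-review in six months",
}

# Serialising records as JSON keeps an auditable trail of risk decisions.
print(json.dumps(documentation_record, indent=2))
```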

What should enterprises keep in mind when adopting AI?

Using AI will ultimately make enterprises more competitive, but organizations adopting AI in any capacity should be aware of the risks. NIST’s Martin Stanley emphasized that a risk-aware culture is key to responsible adoption: “If you’re looking to do AI risk management as a one-shot deal, you’re going to be anxious and you’re going to be disappointed.”

How are federal agencies adopting AI?

It varies across sectors and federal agencies. In general, there has been a huge effort to engage on the topic and explore opportunities. For example, the U.S. Department of Homeland Security published a Privacy Impact Assessment (PIA) on the Use of Conditionally Approved Commercial Generative Artificial Intelligence Tools to explore how generative AI can be used across the department. It is expected that more federal agencies will publish similar reports in the coming months.

Will there be audits or formal oversight of AI?

Codification of best practice is likely far down the line. Audits and oversight of this technology in particular areas are addressed in the Biden-Harris Executive Order, which calls on federal agencies to determine how AI should be governed in the spaces they regulate. The accompanying Proposed Memorandum for the Heads of Executive Departments and Agencies is available for public review.

