NIST AI RMF Generative AI Use Case Profiles

June 10, 2024
Authored by
Ella Shoup
AI Policy Associate at Holistic AI

On Monday 29 April 2024, the National Institute of Standards and Technology (NIST) published a draft AI RMF Generative AI Profile. Designed as a companion piece to its AI Risk Management Framework (AI RMF), the Generative AI Use Case Profile guides organizations in identifying and responding to risks posed by generative AI. The Profile was released alongside three other draft documents focused on generative AI (GAI): Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, Reducing Risks Posed by Synthetic Content, and A Plan for Global Engagement on AI Standards. These guidelines are part of NIST's mandate under President Biden's Executive Order on AI. Like the AI RMF, all of the draft documents are voluntary and cross-sectoral. In this blog post, we outline what you need to know about NIST's Generative AI Use Case Profile.

Understanding NIST’s Generative AI Profile Framework

The AI RMF Generative AI Profile serves as both a use-case and cross-sectoral profile of the AI RMF 1.0. Use-case profiles offer insights into implementing the AI RMF functions for specific applications, while cross-sectoral profiles address risks associated with activities common across sectors. By delineating risks and actions, this framework provides a roadmap for managing GAI-related challenges across various stages of the AI lifecycle.

Similar to the AI RMF process, the Generative AI Profile is open for comments until 2 June 2024, giving the GAI community an opportunity to shape the final framework.

Identifying Risks According to the Generative AI Use Case Profile

The draft Use Case Profile highlights a spectrum of risks unique to GAI, ranging from the proliferation of dangerous content to environmental impacts. These risks include the following (a brief sketch of how they might be encoded appears after the list):

  • CBRN Information: Lowered barriers to entry or eased access to materially nefarious information related to chemical, biological, radiological, or nuclear (CBRN) weapons, or other dangerous biological materials.
  • Confabulation: The production of confidently stated but erroneous or false content (known colloquially as “hallucinations” or “fabrications”).
  • Dangerous or Violent Recommendations: Eased production of and access to violent, inciting, radicalizing, or threatening content as well as recommendations to carry out self-harm or conduct criminal or otherwise illegal activities.
  • Data Privacy: Leakage and unauthorized disclosure or de-anonymization of biometric, health, location, personally identifiable, or other sensitive data.
  • Environmental: Impacts due to high resource utilization in training GAI models, and related outcomes that may result in damage to ecosystems.
  • Human AI Configuration: Arrangement or interaction of humans and AI systems which can result in algorithmic aversion, automation bias or over-reliance, misalignment or misspecification of goals and/or desired outcomes, deceptive or obfuscating behaviors by AI systems based on programming or anticipated human validation, anthropomorphization, or emotional entanglement between humans and GAI systems; or abuse, misuse, and unsafe repurposing by humans.
  • Information Integrity: Lowered barrier to entry to generate and support the exchange and consumption of content which may not be vetted, may not distinguish fact from opinion or acknowledge uncertainties, or could be leveraged for large-scale dis- and misinformation campaigns.
  • Information Security: Lowered barriers for offensive cyber capabilities, including ease of security attacks, hacking, malware, phishing, and offensive cyber operations through accelerated automated discovery and exploitation of vulnerabilities; increased available attack surface for targeted cyber-attacks, which may compromise the confidentiality and integrity of model weights, code, training data, and outputs.
  • Intellectual Property: Eased production of alleged copyrighted, trademarked, or licensed content used without authorization and/or in an infringing manner; eased exposure to trade secrets; or plagiarism or replication with related economic or ethical impacts.
  • Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, degrading, and/or abusive imagery, including synthetic child sexual abuse material (CSAM), and nonconsensual intimate images (NCII) of adults.
  • Toxicity, Bias, and Homogenization: Difficulty controlling public exposure to toxic or hate speech, disparaging or stereotyping content; reduced performance for certain sub-groups or languages other than English due to non-representative inputs; undesired homogeneity in data inputs and outputs resulting in degraded quality of outputs.
  • Value Chain and Component Integration: Non-transparent or untraceable integration of upstream third-party components, including data that has been improperly obtained or not cleaned due to increased automation from GAI; improper supplier vetting across the AI lifecycle; or other issues that diminish transparency or accountability for downstream users.
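
To make these categories concrete, here is a minimal, purely illustrative Python sketch of how an organization might encode the twelve risk categories when tagging its own GAI use cases. The enum names and the UseCase structure are our own assumptions; the draft Profile does not prescribe any data model.

from dataclasses import dataclass
from enum import Enum, auto

class GAIRisk(Enum):
    """The twelve GAI risk categories from NIST's draft Generative AI Profile."""
    CBRN_INFORMATION = auto()
    CONFABULATION = auto()
    DANGEROUS_OR_VIOLENT_RECOMMENDATIONS = auto()
    DATA_PRIVACY = auto()
    ENVIRONMENTAL = auto()
    HUMAN_AI_CONFIGURATION = auto()
    INFORMATION_INTEGRITY = auto()
    INFORMATION_SECURITY = auto()
    INTELLECTUAL_PROPERTY = auto()
    OBSCENE_DEGRADING_OR_ABUSIVE_CONTENT = auto()
    TOXICITY_BIAS_AND_HOMOGENIZATION = auto()
    VALUE_CHAIN_AND_COMPONENT_INTEGRATION = auto()

@dataclass
class UseCase:
    """A hypothetical internal record tagging a GAI deployment with Profile risks."""
    name: str
    risks: set[GAIRisk]

# Example: a customer-facing chatbot flagged for three of the twelve categories.
chatbot = UseCase("support-chatbot", {
    GAIRisk.CONFABULATION,
    GAIRisk.DATA_PRIVACY,
    GAIRisk.HUMAN_AI_CONFIGURATION,
})
print(sorted(risk.name for risk in chatbot.risks))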

Taking action against generative AI risks

NIST provides various proactive measures organizations can take to mitigate the risks of GAI. These actions, categorized under the four AI RMF Core functions – Govern, Map, Measure, and Manage – provide a structured approach to risk management (a simple checklist sketch follows the list):

  • Govern: Establish clear policies and guidelines for GAI development and deployment, ensuring ethical and responsible use.
  • Map: Identify potential risks and vulnerabilities across the AI lifecycle, from development to decommissioning.
  • Measure: Implement metrics and benchmarks to assess the effectiveness of risk mitigation strategies.
  • Manage: Develop robust mechanisms for monitoring, detecting, and responding to GAI-related risks in real time.
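
As a purely illustrative sketch, the four Core functions can be treated as a simple checklist structure for tracking GAI risk work. The tasks below are hypothetical examples of our own, not NIST requirements.

# Hypothetical tasks organized under the four AI RMF Core functions.
CORE_FUNCTION_TASKS = {
    "Govern": ["Publish a GAI acceptable-use policy", "Assign risk ownership"],
    "Map": ["Inventory GAI use cases", "Document data sources and provenance"],
    "Measure": ["Benchmark confabulation rates on representative prompts", "Track red-team findings"],
    "Manage": ["Monitor deployed GAI systems", "Define incident response for harmful outputs"],
}

for function, tasks in CORE_FUNCTION_TASKS.items():
    for task in tasks:
        print(f"[{function}] {task}")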

Tailoring generative AI risk mitigation actions to context

It's essential to recognize that not all actions will be relevant to every organization. The framework emphasizes tailoring risk management strategies to an organization's unique context and situation, so organizations should assess their risk tolerance and resource capabilities to prioritize actions effectively. Nonetheless, some actions – such as many of those under the Govern function – are considered "foundational," meaning they should be treated as fundamental tasks for GAI risk management. This aligns with the overall recommendation of the AI RMF Core, which positions the Govern function as the bedrock of the entire framework.

For example, take the subcategory Govern 1.1, which is considered foundational:

Govern 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.

  • Action: Align GAI use with applicable laws and policies, including those related to data privacy and the use, publication, or distribution of licensed, patented, trademarked, copyrighted, or trade secret material. (GAI Risks: Data Privacy, Intellectual Property)
  • Action: Disclose use of GAI to end users. (GAI Risk: Human AI Configuration)

While these actions are voluntary under the NIST generative AI guidelines, organizations may still be compelled to take them in certain jurisdictions where, for example, they are required to disclose the use of GAI to end users.

By contrast, for Manage 4.2 – which is not considered foundational – NIST recommends certain actions regarding organizational practice, an area where legal requirements do not typically apply (a brief sketch encoding both sets of actions follows the table below):

Manage 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.

  • Action: Adopt agile development methodologies, and iterative development and feedback loops, to allow for rapid adjustments based on external input related to content provenance. (GAI Risks: Data Privacy, Intellectual Property)
  • Action: Employ explainable AI methods to enhance transparency and interpretability of GAI content provenance to help AI actors and stakeholders understand how and why specific content is generated. (GAI Risks: Human AI Configuration, Information Integrity)
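
To show how these action tables might be operationalized, here is a minimal sketch that encodes the Govern 1.1 and Manage 4.2 rows above as records and lists foundational actions first, mirroring the Profile's emphasis on governance. The ProfileAction data model is our own assumption, not part of the NIST draft, and the action text paraphrases the tables above.

from dataclasses import dataclass

@dataclass
class ProfileAction:
    subcategory: str        # e.g. "Govern 1.1" or "Manage 4.2"
    action: str             # paraphrased from the tables above
    risks: tuple[str, ...]  # GAI risk categories the action addresses
    foundational: bool      # whether the subcategory is labeled foundational

ACTIONS = [
    ProfileAction("Govern 1.1", "Align GAI use with applicable laws and policies",
                  ("Data Privacy", "Intellectual Property"), foundational=True),
    ProfileAction("Govern 1.1", "Disclose use of GAI to end users",
                  ("Human AI Configuration",), foundational=True),
    ProfileAction("Manage 4.2", "Adopt agile, iterative development with feedback loops for content provenance",
                  ("Data Privacy", "Intellectual Property"), foundational=False),
    ProfileAction("Manage 4.2", "Employ explainable AI methods for GAI content provenance",
                  ("Human AI Configuration", "Information Integrity"), foundational=False),
]

# Print foundational actions first, then context-dependent ones.
for a in sorted(ACTIONS, key=lambda a: not a.foundational):
    tag = "foundational" if a.foundational else "context-dependent"
    print(f"[{a.subcategory} | {tag}] {a.action} -> {', '.join(a.risks)}")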

Prioritize AI Governance

Although voluntary, implementing an AI risk management framework can increase trust and ROI by ensuring your AI systems perform as expected. Holistic AI's Governance Platform is a 360 solution for AI trust, risk, security, and compliance that can help you get ahead of evolving AI standards. Schedule a demo to find out how we can help you adopt AI with confidence.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
