Why Joint US-UK AI Safety Guidance Matters for Governance and Standardization

January 3, 2024
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI

While guidelines may not carry the same legal weight as legislation, their influence is undeniable, especially when major markets like the US and UK align on a framework that could potentially shape future laws. In November 2023, this became a reality as the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) released joint Guidelines for Secure AI System Development.

This collaboration is crucial in a landscape where recent polls suggest AI plays a role in 85% of cyber attacks. As organizations evolve their AI governance, aligning with these guidelines is not only a strategic move in cybersecurity but also a proactive step in anticipation of regulatory requirements that may one day apply to all AI systems.

Despite the unpredictable nature of future legislation and the divergent paths historically taken by the US and UK in AI regulation, this joint stance on AI security signals a convergence of priorities. This guide aims to demonstrate why CTOs, CISOs, and CDOs should prioritize these guidelines to support enhanced governance, standardization, and, ultimately, efficacy across their AI initiatives.

The TL;DR of the Guidelines for Secure AI System Development

On November 26, 2023, the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) published the joint Guidelines for Secure AI System Development.

How do the Guidelines define AI?

The Guidelines define AI as applications that utilize machine learning – software that learns from data on its own, without needing explicit, hand-written instructions. These tools can then use what they've learned to make predictions, suggestions, or even decisions based on patterns and trends discovered through statistical reasoning.
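
To make that definition concrete, below is a minimal sketch of the pattern, using scikit-learn purely as an illustrative library: the model receives labeled examples rather than explicit rules and derives a statistical decision boundary on its own.

```python
# A minimal sketch of "software that learns from data": no rules are
# hand-written; the model infers a decision boundary from examples.
from sklearn.linear_model import LogisticRegression

# Toy training data: feature vectors and their observed labels.
X_train = [[0.1, 0.2], [0.3, 0.1], [0.8, 0.9], [0.9, 0.7]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # learn patterns from the data

print(model.predict([[0.85, 0.8]]))  # apply those patterns: prints [1]
```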

Who do the Guidelines apply to?

In short, everyone involved with AI, including vendors, internal tool builders, and end users. This may seem like a bit of an overreach, but the Guidelines provide some justification.

Responsibility for AI safety can get tricky. Supply chains are often complex, with multiple companies involved. This can blur the lines of who's responsible for keeping the AI secure.

Additionally, the Guidelines are intentionally addressed to a broad audience, to support transparency and safety for a wide variety of stakeholders. In the words of CISA’s announcement of the Guidelines:

“The Guidelines apply to all types of AI systems, not just frontier models. We provide suggestions and mitigations that will help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their machine learning AI systems.”

But 360-degree AI safety is tough. What do the Guidelines recommend?

The Guidelines suggest a "secure-by-design" approach. This means providers take the lead in securing the AI system, even if it's used by others. Think of it like building a safe car, even if someone else drives it. Additionally, the Guidelines note that for ongoing security and safety purposes, end users and their education play a role as well. Providers should be upfront with users about the risks and how to use the AI safely.

Like a range of other AI standards, the Guidelines recognize that not all AI is equally risky. Systems that could harm people or reputations, or leak sensitive information, should be treated as “critical” and given greater attention to security.

What’s actionable about the Guidelines today?

The Guidelines break their suggestions down by stage: AI system design, development, deployment, and operations and maintenance. We’ll take a deeper dive into the precise recommendations for each stage below.

Why should I care?

Alongside the recently published G7 International Guiding Principles and Code of Conduct on governing advanced AI systems, the joint CISA and NCSC publication is one of the first international initiatives on AI governance. It could pave the way for increased global cooperation and standardization in the regulation of AI. It is also one of the first pieces of nationwide AI security guidance in the US.

Additionally, high-profile frameworks often serve as source material for standards and legally binding legislation. Most large organizations operate in the United States, the United Kingdom, or both, and could de-risk future AI program growth by aligning their processes with the Guidelines today.

The document’s cybersecurity focus creates a golden opportunity for organizations with established cybersecurity postures to leverage their existing infrastructure and accelerate AI governance. By embracing robust practices, they can not only minimize risks (bias, vulnerabilities, lack of explainability) but also enhance outcomes and boost alignment across security and AI operations, including:

  • Boosting efficiency: Ensure your AI systems make sound decisions and deliver optimal results.
  • Increasing adoption: Teams that trust the outputs and safety of AI systems are more likely to use them fully.
  • Building trust: 62% of consumers report placing higher trust in companies whose AI interactions they perceive as ethical, indicating the business value of unbiased AI.

Below we’ll work through the recommendations presented in the Guidelines by AI product lifecycle stage.

Recommendations from the Guidelines for Secure AI System Development by stage

Recommendations for the design stage

As AI systems become increasingly complex and integrated into critical applications, the potential for cyberattacks, biases, and unintended consequences grows. To mitigate these risks, secure design principles are essential from the very beginning of the development process.

  • Raise staff awareness of threats, risks, and mitigations by ensuring system owners, data scientists, and users have adequate information, documentation, and procedures for secure and responsible AI
  • Use a holistic process to model threats to a system to understand potential impacts if a component is compromised or behaves unexpectedly, and document decisions to protect against vulnerabilities (a documentation sketch follows this list)
  • Design systems for security, functionality, and performance, considering supply chain security for components, integrating system development into existing secure development and operations best practices, and ensuring there are appropriate restrictions and fail-safes to address AI-specific risks
  • Consider security benefits and trade-offs when selecting models, including model architecture, configuration, training data, training algorithm, and hyperparameters to balance complexity, appropriateness, explainability, training data requirements, integration of supply chain components, and privacy
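
The Guidelines call for documented threat modeling but leave the format open. The sketch below shows one hypothetical way to record a threat, its potential impact if a component is compromised, and the mitigation decided on; all field names and example values are illustrative assumptions:

```python
# Hypothetical threat-model record for an AI system component. The
# Guidelines ask for documented threat modeling but prescribe no
# format, so this structure is an illustrative assumption.
from dataclasses import dataclass, asdict
import json

@dataclass
class ThreatRecord:
    component: str   # part of the system under analysis
    threat: str      # what could go wrong
    impact: str      # consequence if compromised
    mitigation: str  # documented protection decision

records = [
    ThreatRecord(
        component="training-data pipeline",
        threat="data poisoning via third-party feeds",
        impact="model learns attacker-chosen behavior",
        mitigation="validate and checksum feeds; pin trusted sources",
    ),
]

# Persist decisions so they can be revisited across the life cycle.
print(json.dumps([asdict(r) for r in records], indent=2))
```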

Recommendations for the development stage

Building on the design stage, the development of complex systems involves many trade-offs and decisions with long-lasting impact. To mitigate future risk (and the security risk attached to early versions of a system), the Guidelines urge the following during AI system development:

  • Secure the supply chain by monitoring its security across a system’s life cycle and supporting suppliers to adhere to system standards or risk management policies
  • Identify, track, and protect assets through an inventory that records value, investment, versions, and vulnerability to attackers, and control access to logs
  • Document data, models, and prompts, capturing security-relevant information – including training data sources, scope and limitations, guardrails, review frequency, and failure modes – in structures such as model cards and data cards (see the sketch after this list)
  • Manage technical debt, or engineering decisions that do not meet best practices to achieve short-term aims, throughout the system’s life cycle and apply lessons learned to future similar systems
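
Model cards and data cards are an established documentation pattern, though the Guidelines prescribe no schema. This minimal sketch shows the kind of security-relevant metadata such a card might capture; the field names and the example system are assumptions for illustration:

```python
import json

# Minimal model-card sketch. The Guidelines name the content (data
# sources, limitations, guardrails, failure modes) but not a schema,
# so the fields below are illustrative assumptions.
model_card = {
    "model": "support-ticket-classifier",  # hypothetical system
    "version": "1.3.0",
    "training_data_sources": ["internal tickets, 2021-2023"],
    "scope_and_limitations": "English-language tickets only",
    "guardrails": ["input length limits", "PII redaction before logging"],
    "review_frequency": "quarterly",
    "known_failure_modes": ["mislabels sarcasm", "drifts on new products"],
}

# Version the card alongside the model so reviews stay in sync.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```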

Recommendations for the deployment stage

While there are security implications to nearly any software deployment, AI systems – and particularly generative AI systems – require consideration of additional security risk factors. The Guidelines outline the following focus areas for AI system deployment:

  • Apply good infrastructure security principles throughout the system’s lifecycle and apply access controls to components of the system to segregate sensitive code or data and mitigate cyberattacks (an access-control and audit-logging sketch follows this list)
  • Continuously protect models against direct and indirect attacks, unauthorized access, and tampering by implementing controls, following cybersecurity best practices, and adopting privacy-preserving practices
  • Develop incident management procedures to escalate and remedy security incidents that consider different scenarios and ensure that responders have the appropriate training to assess and address incidents, in combination with high-quality audit logs and security features
  • Release AI responsibly after appropriate and effective security evaluations including benchmarking and red teaming, ensuring that any limitations are communicated to users
  • Assess configuration options in terms of business benefits and security risks to control against malicious use and threats, and communicate to users the security considerations they are responsible for
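
To make the access-control and audit-logging recommendations concrete, here is a hedged sketch of a thin gateway placed in front of a model endpoint. The token allowlist and the run_model stub are placeholders for illustration, not a deployment pattern mandated by the Guidelines:

```python
# Sketch of a gateway enforcing access control and audit logging in
# front of a model. AUTHORIZED_TOKENS and run_model are placeholders.
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

AUTHORIZED_TOKENS = {"team-a-secret-token"}  # assumption: static allowlist

def run_model(prompt: str) -> str:
    return f"(model output for: {prompt})"   # stand-in for real inference

def handle_request(token: str, prompt: str) -> str:
    # Log a hash of the credential, never the raw token itself.
    user = hashlib.sha256(token.encode()).hexdigest()[:8]
    if token not in AUTHORIZED_TOKENS:
        audit_log.warning("denied request from %s", user)
        raise PermissionError("unauthorized")
    audit_log.info("request from %s, prompt length %d", user, len(prompt))
    return run_model(prompt)

print(handle_request("team-a-secret-token", "summarize this ticket"))
```

Keeping the gateway separate from the model itself mirrors the Guidelines’ point about segregating sensitive components.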

Recommendations for the operations and maintenance stage

Finally, the joint Guidelines address secure operation and maintenance through four brief recommendations:

  • Monitor system performance to identify and account for security risks, intrusions, and data drift (a drift-monitoring sketch follows this list)
  • Monitor system inputs in line with privacy and data protection requirements to maximize compliance and facilitate audits, investigation, and remediation
  • Follow a secure-by-design approach to updates, favoring modular updates that consider how changes to data, models, or prompts might impact system behavior
  • Collect and share lessons learned across the global ecosystem to share best practices and maintain communication lines for internal and external feedback
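
The Guidelines call for monitoring but name no specific technique. One common approach to the data-drift point is a statistical comparison of live inputs against a training-time baseline; the sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy, with an assumed alert threshold:

```python
# Sketch of input-drift monitoring: compare live feature values with a
# training-time baseline. The alert threshold is an assumed value; the
# Guidelines require monitoring but mandate no particular test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=1000)  # training distribution
live = rng.normal(loc=0.4, scale=1.0, size=1000)      # shifted production data

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:  # assumed alert threshold
    print(f"possible data drift (KS={stat:.3f}, p={p_value:.2e})")
```

In production, a check like this would run on rolling windows of real inputs and feed into the incident-management procedures described above.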

From regulatory preparedness to competitive advantage

As data and tech leaders begin to lay the foundation for safety in AI design, development, deployment, and maintenance, aligning with widely accepted guidelines in your key markets is a logical starting point. But it’s just that: a starting point. The true benefits of trustworthy AI go beyond avoiding legal, financial, and reputational damage; they extend to more reliable AI impact through efficacy, internal trust, and reduced downtime.

AI risk management is now a competitive necessity that early evangelists are beginning to perfect. As the only provider of a 360-degree AI governance, risk, and compliance platform, Holistic AI is enabling many of these leading teams. Want to chat through your AI initiatives with one of our policy and ML experts? Schedule a free consultation today.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
