
On 15 April 2024, the US National Security Agency’s Artificial Intelligence Security Center (NSA AISC) released joint guidance on Deploying AI Systems Securely in collaboration with the Cybersecurity & Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ASD ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC-UK).
The guidance expands on information sheets previously released by CISA, Guidelines for Secure AI System Development and Engaging with Artificial Intelligence. It also aligns with CISA’s Cross-Sector Cybersecurity Performance Goals and the National Institute of Standards and Technology's (NIST) Cybersecurity Framework (CSF).
In their guidance, the international agencies advise organizations that deploy AI systems to adopt robust security measures that prevent both misuse of AI systems and theft of sensitive data, helping to create systems that are secure by design. Specifically, the guidance on Deploying AI Systems Securely recommends best practices for deploying and using externally developed AI systems and aims to:

- Improve the confidentiality, integrity, and availability of AI systems;
- Ensure that known cybersecurity vulnerabilities in AI systems are appropriately mitigated; and
- Provide methodologies and controls to protect, detect, and respond to malicious activity against AI systems and their related data and services.
There are three overarching best practices recommended by the international guidance on secure AI systems:

1. Secure the deployment environment;
2. Continuously protect the AI system; and
3. Secure AI operation and maintenance.
While the joint guidelines are voluntary, CISA encourages all institutions that deploy or use externally developed AI systems to apply them, adapting the recommendations as necessary.
However, the guidance on Deploying AI Systems Securely does not apply to organizations that do not deploy AI systems themselves and instead use AI systems deployed by others.
The joint guidance highlights the growing importance that governments are placing on making AI systems safer, as well as the push towards international cooperation on trustworthy AI. It comes on the heels of a statement released by US federal agencies on the use of automated systems, which reinforced the applicability of existing laws to automated systems and the importance of ensuring that such systems are developed in accordance with those laws.
Compliance is vital to uphold trust and innovate with AI safely. To find out how Holistic AI can help you get your algorithms legally compliant, get in touch at we@holisticai.com.