The US Pushing for Responsible AI in Military Use

March 13, 2023
Authored by
Ashyana-Jasmine Kachra
Policy Associate at Holistic AI
Daniel Shin
Cybersecurity Researcher at William & Mary Law School’s Center for Legal and Court Technology

Key Takeaways

  • 16 February 2023: The US Department of State published the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.”
  • The declaration puts forth a list of 12 best practices for responsible AI in the military domain.
  • There is an emphasis on military AI capabilities being used only in accordance with obligations under international law.
  • There is a focus on developing and deploying AI with auditable methodologies and on avoiding unintended consequences (risk management).
  • The declaration acknowledges both the benefits and risks of AI, with particular attention to mitigating bias.
  • Military AI capabilities should be subject to assurance processes throughout their lifecycles.
  • The declaration has been signed by 60 countries including the US and China.
  • Third-party auditing and assurance are critical for the responsible development and deployment of AI systems across industry and the military.
The Landscape of Artificial Intelligence (AI) Adoption

The discourse around artificial intelligence (AI) adoption, its benefits, and its implications has been dominated by generative AI and large language models (LLMs) such as ChatGPT and Bard. Publications such as Time Magazine and the New York Times have referred to this as an “AI arms race,” a misleading label given that a literal AI arms race also very much exists.

The integration of AI into military capabilities has been reported and acknowledged around the world, in both government and potentially rogue military bodies.

In this article, we begin by looking at the US’s latest effort to promote the adoption of responsible AI in the military. We then briefly discuss military investment in the US and China and the macro-level implications of AI-enabled military capabilities, making a case for the growing importance of risk management and auditable methodologies.

Responsible AI in the Military Domain

On 15-16 February 2023, the first global summit on Responsible Artificial Intelligence in the Military Domain (REAIM) was held in the Netherlands. The US used the summit as an opportunity to put forth its “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.”

The declaration defines AI as “the ability of machines to perform tasks that would otherwise require human intelligence […]” and autonomy as involving “a system operating without further human intervention after activation.”

There is an emphasis on considering both the risks and benefits, with an explicit aim of minimising “unintended bias and accidents.” This speaks to a broader ecosystem in the US, where there has been substantial focus on bias in AI (specifically algorithmic discrimination) and the negative implications it can have for individuals’ life opportunities. For example, insurance company State Farm faces an ongoing lawsuit alleging that its automated claims processing has resulted in algorithmic bias against Black homeowners. In another recent case, Louisiana authorities came under fire when the use of facial recognition technology led to a mistaken arrest, and an innocent man was jailed for a week. In the military context, bias can lead to members of vulnerable groups, such as migrants, being disproportionately subjected to automated methods of surveillance, exclusion, and criminalisation. It could also lead to an increase in wrongful profiling by defence departments.

Finally, the declaration puts forth a list of 12 best practices for responsible AI in the military domain, which we paraphrase below:

  1. Legal reviews: ensuring alignment with respective obligations under international law.
  2. Human control: maintaining human control and involvement, particularly concerning nuclear weapons employment.
  3. Human oversight: overseeing the development and deployment of all military AI capabilities with high-consequence applications, such as weapon systems.
  4. Governance: adopting, publishing, and implementing principles for the responsible development and use of AI.
  5. Human judgment: relevant personnel exercising appropriate care, including appropriate levels of human judgment.
  6. Mitigating bias: taking deliberate steps to minimise unintended bias.
  7. Auditable methodologies: developing military AI capabilities with auditable methodologies and documentation.
  8. Human training: ensuring that personnel who use or approve the use of military AI capabilities are trained to make context-informed judgments on their use.
  9. Clear use-cases: having explicit, well-defined uses and fulfilling those intended functions.
  10. Assurance and monitoring: subjecting AI systems to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life cycles.
  11. Risk management: designing and engineering military AI capabilities so that they possess the ability to avoid unintended consequences, and implementing other appropriate safeguards to mitigate the risk of serious failures.
  12. Continuous dialogue: continuing discussions on how military AI capabilities can be developed, deployed, and used responsibly.

As of 20 February 2023, the declaration has been signed by 60 countries, including the US and China. The endorsing states are now expected to be:

  • implementing these practices;
  • publicly describing their commitment;
  • supporting other appropriate efforts to ensure that such capabilities are used responsibly and lawfully; and
  • engaging the rest of the international community to promote these practices.

The declaration echoes NIST’s AI Risk Management Framework 1.0 (AI RMF) by making it clear that safeguards to mitigate risks can come from broader AI risk management best practices, not just those developed for military AI capabilities. This points to a larger ecosystem in which AI, across all of its uses, should be developed and deployed in the context of risk management and assurance practices.

In addition to this declaration, there has recently been a broader cultural shift towards responsible AI practices within both legal frameworks and voluntary standards, such as the AI RMF.

However, the promotion of ethical principles and responsible AI in the military is not novel.

On 21 February 2020, the US Department of Defense (DOD) became the first military department in the world to adopt ethical principles for all its AI capabilities, including AI-enabled autonomous systems, for warfighting and business applications.

The Ethical Principles for Artificial Intelligence aim to support the Department’s leadership in AI ethics and the lawful use of AI systems in both combat and non-combat functions. The following are the five principles, along with the explanations provided by the DOD:

  • Responsible: DOD personnel exercising appropriate levels of judgment and care.
  • Equitable: DOD taking deliberate steps to minimize unintended bias.
  • Traceable: being developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: having explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities being subject to testing and assurance within those defined uses across their entire life-cycles.
  • Governable: designing and engineering AI capabilities to fulfil their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour.

These principles aim to address the gaps that the use of AI introduces into the existing military policy and legal framework. To implement these Ethical Principles, on 26 May 2021, Deputy Secretary of Defense Kathleen H. Hicks announced the Department’s approach to Responsible Artificial Intelligence, built on the following actionable tenets:

  • Responsible Artificial Intelligence Governance: ensuring a governance structure for oversight and accountability, and setting clear guidelines and policies on Responsible AI.
  • Warfighter Trust: ensuring trusted and trustworthy AI capabilities through education and training; a testing, evaluation, verification, and validation framework; algorithm confidence metrics; and user feedback.
  • Artificial Intelligence Product and Acquisition Lifecycle: synchronizing Responsible AI implementation throughout the acquisition lifecycle using a systems engineering and risk management approach.
  • Requirements Validation: ensuring Responsible AI’s inclusion in appropriate DOD artificial intelligence capabilities.
  • Responsible Artificial Intelligence Ecosystem: harnessing the Responsible AI Ecosystem to improve intergovernmental, academic, industry, and stakeholder collaboration, and advancing global norms grounded in shared values.
  • Artificial Intelligence Workforce: developing a Responsible AI workforce through education and training on Responsible AI.

Finally, in June 2022, the DOD published its Responsible Artificial Intelligence Strategy and Implementation Pathway, which aims to facilitate the implementation of Responsible Artificial Intelligence for both combat and non-combat operations.

The Department’s approach to Responsible Artificial Intelligence largely focuses on supplementing existing laws, regulations, and norms to address novel issues stemming from AI, taking a keen interest in reliability, risk management, and ethics. These efforts may also signify the DOD’s determination to capitalise on its first-mover opportunity to set norms on the military use of AI globally, likely influencing other nations to adopt similar frameworks.

The DOD’s Responsible Artificial Intelligence efforts follow closely the US government’s general approach to Trustworthy Artificial Intelligence. For example, the Office of the Director of National Intelligence’s Principles of Artificial Intelligence Ethics for the Intelligence Community and its Artificial Intelligence Ethics Framework for the Intelligence Community contain comparable principles and approaches to ethically operating AI technologies. Similarly, Executive Order 13960 adopts comparable principles across the US government for all non-defence and non-national-security purposes. Overall, the US military’s approach to Responsible Artificial Intelligence reflects the nation’s commitment to harmonise AI policy across all sectors.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
