The discourse around artificial intelligence (AI) adoption, its benefits, and its implications has been dominated by generative AI and large language models (LLMs) such as ChatGPT and Bard. Publications such as Time Magazine and the New York Times have referred to this competition as an “AI Arms Race” – a misleading label, since a literal AI arms race also very much exists.
The integration of AI into military capabilities has been reported and acknowledged throughout the world – in both government forces and potentially rogue military bodies.
In this article, we begin by examining the US’s latest efforts to promote the adoption of responsible AI in the military. We then briefly discuss military AI investments in the US and China, as well as the macro-level implications of AI in military capabilities, making a case for the growing importance of risk management and auditable methodologies.
On 15-16 February 2023, the first global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) was held in the Netherlands. The US used the summit as an opportunity to put forward its “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.”
The declaration defines AI as “the ability of machines to perform tasks that would otherwise require human intelligence […]” and autonomy as involving “a system operating without further human intervention after activation.”
There is an emphasis on considering both the risks and benefits, with an explicit aim of minimising “unintended bias and accidents.” This speaks to a broader ecosystem in the US, where there has been substantial focus on bias in AI (specifically algorithmic discrimination) and its negative implications for individuals’ life opportunities. For example, insurance company State Farm is facing an ongoing lawsuit over allegations that its automated claims processing has resulted in algorithmic bias against Black homeowners. In another recent case, Louisiana’s authorities came under fire when the use of facial recognition technology led to a mistaken arrest, and an innocent man was jailed for a week. In the military context, bias can lead to members of vulnerable groups, such as migrants, being disproportionately subjected to automated surveillance, exclusion, and criminalisation. It could also lead to an increase in wrongful profiling by defence departments.
Finally, the declaration puts forth a list of best practices for responsible AI in the military domain, which we paraphrase below:
As of 20 February 2023, the US and China have signed the declaration along with 60 other countries. The endorsing states are now expected to be:
The declaration echoes the AI RMF by making clear that safeguards to mitigate risks can come from other AI risk management best practices – not just those developed for military AI capabilities. This points to a larger ecosystem in which AI, whatever its use, should be developed and deployed in the context of risk management and assurance practices.
In addition to this declaration, there has recently been an overall cultural shift to encourage responsible AI practices within both legal frameworks and voluntary standards, such as NIST’s AI Risk Management Framework 1.0 (AI RMF).
However, the promotion of ethical principles and responsible AI in the military is not novel.
On 21 February 2020, the US Department of Defense (DOD) became the first military department in the world to adopt ethical principles for all its AI capabilities, including AI-enabled autonomous systems, for warfighting and business applications.
The Ethical Principles for Artificial Intelligence aim to support the Department’s leadership in AI ethics and the lawful use of AI systems in both combat and non-combat functions. The following are the five principles and explanations provided by the DOD’s Ethical Principles for Artificial Intelligence:
These principles aim to address the gaps within the existing military policy and legal framework introduced by the use of AI. To implement these Ethical Principles, on 26 May 2021, Kathleen H. Hicks, the Deputy Secretary of Defense, announced the Department’s approach to Responsible Artificial Intelligence with the following actionable tenets:
Finally, in June 2022, DOD published its Responsible Artificial Intelligence Strategy and Implementation Pathway which aims to facilitate its implementation of Responsible Artificial Intelligence for both combat and non-combat operations.
The Department’s approach to Responsible Artificial Intelligence largely focuses on supplementing existing laws, regulations, and norms to address novel issues stemming from AI, taking a keen interest in reliability, risk management, and ethics. These efforts may also signify the DOD’s determination to capitalise on its first-mover opportunity to set norms on the military use of AI globally, likely influencing other nations to adopt similar frameworks.
The DOD’s Responsible Artificial Intelligence efforts follow closely with the US government’s general approach to Trustworthy Artificial Intelligence. For example, the Office of the Director of National Intelligence’s Principles of Artificial Intelligence Ethics for the Intelligence Community and its Artificial Intelligence Ethics Framework for the Intelligence Community contain comparable principles and approaches to the ethical operation of AI technologies. Similarly, Executive Order 13960 adopts related principles for all non-defence and non-national security purposes across the US government. Overall, the US military approach to Responsible Artificial Intelligence reflects the nation’s commitment to harmonising AI policy across all sectors.
Written by Ashyana-Jasmine Kachra, Public Policy Associate at Holistic AI, and Daniel Shin, Cybersecurity Researcher at William & Mary Law School’s Center for Legal and Court Technology.