AI Regulation in the Public Sector: Regulating Governments’ Use of AI
March 6, 2023

Artificial intelligence (AI) use has grown rapidly in the last few years, with 44% of businesses taking steps to integrate it into their current processes and applications. However, while AI can offer many business benefits, such as increased productivity, accuracy, and cost savings, using AI comes with risks. Consequently, steps must be taken to reduce these risks and promote AI's safe and trustworthy use. An effective way to do this is to introduce governance mechanisms or codify risk management requirements in the law. Accordingly, policymakers worldwide have begun to propose regulations to make AI systems safer for those using them.

While many of these efforts target businesses' use of AI, governments are also adopting AI more widely, with almost 150 significant federal departments, agencies, and sub-agencies in the US government using AI to support their activities. As such, governmental use of AI is increasingly being targeted by initiatives to govern AI in the public sector. In this blog post, we first outline the different ways governments use AI and then provide a high-level summary of some of the actions taken to regulate public sector use of AI, focusing on the US, UK, and EU.

How are governments using AI?

AI is increasingly being used by governmental departments, agencies, and other public sector entities to automate a variety of tasks, from virtual assistant bots that deliver reminders about pregnancy checkups to mapping the characteristics of businesses in different areas to direct investment towards ventures that are more likely to succeed. Elsewhere, AI is being used in defence activities to enhance decision-making, increase safety, and predict supply and demand, with the US Department of Defense publishing an AI strategy to accelerate the application of AI in the military and the US Defense Advanced Research Projects Agency (DARPA) funding a program to develop a brain-to-machine interface.

However, the potential harms of public sector AI have also been highlighted. The UK's Office of Qualifications and Examinations Regulation (Ofqual) came under fire in 2020 when the algorithm it used to assign GCSE grades, introduced because students were unable to take exams due to COVID restrictions, gave many students lower grades than expected. Further, an already controversial application of AI, facial recognition, is being used by law enforcement to identify suspects and recently garnered much attention due to the wrongful arrest of a man in Georgia who was mistaken for a fugitive by Louisiana authorities' facial recognition technology. With the Gender Shades project revealing the inaccuracies of facial recognition technology for darker-skinned individuals, and both the victim and the fugitive being Black, this case highlights the need to ensure that AI systems, particularly those used in high-risk contexts, are not biased and are accurate for all subgroups. As such, the UK's Equality and Human Rights Commission has called for the suspension of facial recognition in policing in England and Wales, with similar action being taken in Bellingham, Washington, and in Alabama.

Regulation | Region | Brief summary
Declaration on Responsible Military Use of Artificial Intelligence and Autonomy | US | Best practices for states using AI and automation in their military practices.
AI Training Act | US | Requires the Director of the Office of Management and Budget to develop an AI training program for the acquisition workforce.
Executive Order (EO) 13960 | US | Sets out a series of principles that federal agencies must be guided by when considering the design, development, acquisition, and use of AI in government.
Maryland Algorithmic Decision Systems Procurement and Discriminatory Act | US | Requires that if a state unit purchases a product or service that includes an algorithmic decision system, it must adhere to responsible AI standards.
Guidance on building and using AI in the public sector | UK | Provides resources on how to assess whether using AI will help to achieve user needs, how AI can best be used in the public sector, and how to implement AI ethically, fairly, and safely.
Guidelines for AI procurement | UK | Outlines guiding principles on how government departments should buy AI technology and insights on tackling any challenges that may arise during procurement.
Algorithm Registers in the Netherlands | EU | A public database of AI applications used by the Dutch government; entries can be filtered by government branch and detail the type of algorithm used, whether it is actively used, and the policy area it serves.
Italy's White Paper on AI in Public Administration | EU | A report by the Italian government addressing various methods of adopting AI technology into public policies.

US efforts to regulate AI in the public sector

Given that AI is increasingly being used in high-stakes applications in the public sector and several instances of harm have resulted from this, efforts are emerging to govern and regulate public sector applications of AI, with many being centred in the US.  

Declaration on Responsible AI in the Military

Most recently, the US Department of State published the Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which outlines 12 best practices for states using AI and automation in their military practices. These include maintaining human control, using auditable methodologies and design considerations, rigorous testing and assurance across the AI life cycle, and sufficient training for the personnel approving or using military AI capabilities.

AI Training Act

On the note of personnel training, the US has launched an initiative specifically targeting the training of federal agency personnel acquiring AI. Signed into law in October 2022, the AI Training Act (Public Law No. 117-207) requires the Director of the Office of Management and Budget to develop an AI training program for the acquisition workforce. Specifically, this program is designed for employees of an executive agency who are responsible for program management; planning, research, development, engineering, testing, and evaluation of systems; procurement and contracting; logistics; or cost estimation of AI, to ensure that such personnel understand the risks and capabilities of the AI systems they are responsible for procuring. Taking a risk-management approach, the topics to be covered by the training include the science of AI and how it works; the technological features of AI systems; how AI can benefit the federal government; AI risks, including discrimination and privacy risks; methods to mitigate risks, including ensuring that AI is safe, reliable, and trustworthy; and future trends in AI.

Executive Order (EO) 13960: Trustworthy AI in the Federal Government

This effort builds on Executive Order (EO) 13960, Promoting the Use of Trustworthy AI in the Federal Government, signed in December 2020. The EO sets out a series of principles that federal agencies must be guided by when considering the design, development, acquisition, and use of AI in government:

  • Lawful and respectful – agency use of AI should respect the Nation’s values and comply with relevant laws and policies.
  • Purposeful and performance-driven – agencies should seek opportunities for AI-led innovation, but only when the benefits of using AI outweigh the risks and the risks can be appropriately managed.
  • Accurate, reliable, and effective – AI applications should be accurate, reliable, and effective, and consistent with the use cases for which the AI was trained.
  • Safe, secure, and resilient – AI systems should be resilient against systematic vulnerabilities, adversarial manipulation, and other malicious intent.
  • Understandable – the operations and outcomes of AI applications should be understandable to subject matter experts, users, and others, as appropriate.
  • Responsible and traceable – human roles and responsibilities for the design, development, acquisition, and use of AI should be clearly defined and understood, and the design, development, acquisition, and use of AI, as well as its inputs and outputs, should be documented and traceable.
  • Regularly monitored – AI applications should be regularly tested against these principles, and mechanisms should be in place to supersede, disengage, or deactivate AI systems that do not perform consistently with their intended use.
  • Transparent – agencies should be transparent in disclosing relevant information about their use of AI to appropriate stakeholders, including Congress and the public, keeping in mind applicable laws and policies.
  • Accountable – agencies should be accountable for implementing and enforcing appropriate safeguards for the proper use and functioning of their AI, and shall monitor, audit, and document compliance with those safeguards.

As part of this Executive Order, the National Institute of Standards and Technology (NIST) will evaluate and assess AI used by federal agencies to investigate compliance with these principles. In preparation, the US Department of Health and Human Services has already created its inventory of AI use cases.

Use of AI by New York government agencies report

At a more local level, and using different terminology, a report was published by the New York City Automated Decision Systems (ADS) Task Force in November 2019. The Task Force was convened by Mayor Bill de Blasio in 2018 as part of Local Law 49, which required it to provide recommendations on six topics related to the use of ADSs by City agencies. Its report examined three key areas:

  • How to build capacity for an equitable, effective, and responsible approach to using ADSs
  • How to broaden public discussions on ADSs
  • How to formalise ADS management functions

The Task Force’s recommendations included:

  • Establishing an Organizational Structure within City government to act as a centralised resource to guide agency management of ADSs, guided by principles such as fairness and transparency
  • Providing sufficient funding, education, and training to agencies and their staff to support the appropriate use of ADSs
  • Supporting public requests for information about City use of ADSs
  • Establishing a framework for agency reporting of information about ADSs
  • Creating a process for assessing ADS risks

Following this report, Mayor de Blasio signed Executive Order 50 to establish an Algorithms Management and Policy Officer within the Mayor’s Office of Operations. The aim of this was to establish a centralised resource on algorithm policy and develop guidelines and best practices to assist City agencies using algorithms.

Maryland Algorithmic Decision System Procurement and Discriminatory Act

In Maryland, the Algorithmic Decision Systems Procurement and Discriminatory Act was proposed in February 2021 to require that if a state unit purchases a product or service that includes an algorithmic decision system, it must adhere to responsible AI standards. They must also evaluate the system's impact and potential risks, paying particular attention to potential discrimination. Further, state units must ensure the system adheres to transparency commitments, including disclosing the system's capabilities, limitations, and potential problems to the state.

UK efforts to regulate AI in the public sector

Guidance on building and using AI in the public sector

While the UK has not introduced any laws regulating public sector use of AI, reflecting the lack of more general AI-specific legislation in the UK, the Central Digital and Data Office and Office for Artificial Intelligence published guidance on building and using AI in the public sector on 10 June 2019. While brief, the guidance provides resources on assessing whether using AI will help achieve user needs, how AI can best be used in the public sector, and how to implement AI ethically, fairly, and safely.

Citing guidance from the Government Digital Service (GDS) and Office for Artificial Intelligence (OAI), the publication provides four resources on assessing, planning, and managing AI in the public sector. The publication then provides a resource on using AI ethically and safely, co-developed with the Alan Turing Institute, before providing a series of case studies on how AI is being applied in the public sector, from satellite images being used to estimate populations to using AI to compare prison reports. Therefore, instead of comprehensive guiding principles being outlined, which is more characteristic of the US approach, the UK guidance acts as a resource bank.

Guidelines for AI procurement

Taking a more comprehensive approach, the Guidelines for AI procurement, co-published by the Department for Business, Energy & Industrial Strategy, the Department for Digital, Culture, Media & Sport, and the Office for Artificial Intelligence in June 2020, are aimed at central government departments that are considering the suitability of AI technology. Specifically, the document outlines guiding principles on how government departments should buy AI technology and insights on tackling any challenges that may arise during procurement.

Initiated by the World Economic Forum’s Unlocking Public Sector AI project, the guidelines were produced with insights from the World Economic Forum Centre for the Fourth Industrial Revolution and other government bodies and industry and academic stakeholders.

The guidance starts by outlining ten key things that the central government should consider concerning AI procurement:

  1. Ensure Technology and Data strategies are updated to incorporate AI technology adoption and act strategically to support AI adoption across the government.
  2. Seek multidisciplinary insights from diverse teams, including data ethicists and domain experts.
  3. Conduct data assessments before commencing procurement processes.
  4. Assess AI risks and benefits before procurement and deployment.
  5. Engage early with the market and consult various suppliers.
  6. Remain flexible and focus on the challenge rather than a particular solution.
  7. Establish appropriate oversight mechanisms to support the scrutiny of AI systems throughout their lifecycle.
  8. Encourage explainability and transparency by avoiding black box models where possible.
  9. Focus on the need to address the technical and ethical limitations of AI.
  10. Consider how AI systems can be managed throughout their lifecycle.

The guidelines then address AI-specific considerations within the procurement process concerning preparation and planning; publication; selection, evaluation and reward; and contract implementation and ongoing management.

EU efforts to regulate AI in the public sector

While much of the European Commission’s resources are currently invested in the development of the EU AI Act, and the EU is focusing more on businesses using AI, individual member states are introducing their own initiatives to address government use of AI.

Algorithm Registers in the Netherlands

For example, in the Netherlands, the Dutch State Secretary for Digital Affairs announced the launch of an Algorithm Register in 2022.

The register lists the AI applications currently being used by the Dutch government, with 109 entries at the time of writing. Applications can be filtered by government branch, and the database provides detail on the type of algorithm being used, whether it is currently actively used, and the policy area it is used for. Information about monitoring, human intervention, risks, and performance standards is also provided, increasing the transparency of AI usage by the Dutch government.
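To illustrate the kind of metadata such a register captures, the sketch below models an entry with the fields described above (government branch, algorithm type, active status, policy area) and a simple filter by branch. The field names, example entries, and `filter_by_branch` helper are our own illustration, not the Dutch registry's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    # Illustrative fields based on the information the register exposes;
    # the real registry's schema may differ.
    name: str
    branch: str          # government branch responsible for the system
    algorithm_type: str  # e.g. "rule-based" or "machine learning"
    actively_used: bool
    policy_area: str

def filter_by_branch(entries, branch):
    """Return only the entries belonging to the given government branch."""
    return [e for e in entries if e.branch == branch]

# Hypothetical entries for demonstration purposes only
entries = [
    RegisterEntry("Parking permit checker", "Municipality",
                  "rule-based", True, "Mobility"),
    RegisterEntry("Benefit fraud risk model", "Tax Administration",
                  "machine learning", False, "Social security"),
]

print([e.name for e in filter_by_branch(entries, "Municipality")])
# → ['Parking permit checker']
```

Publishing even this minimal level of structured metadata is what makes register-wide filtering, and therefore public scrutiny, possible.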

At a more local level, the cities of Amsterdam and Helsinki launched Algorithm and AI registers in September 2020. Providing information about the three algorithms used in the City of Amsterdam, the Amsterdam register gives an overview of each system and contact information for the responsible department, along with information on the data, data processing, non-discrimination approach, human oversight, and risk management associated with the system.

Italy’s White Paper on AI in Public Administration

Elsewhere, in Italy, a Task Force on Artificial Intelligence was established as part of the Agency for Digital Italy to develop Italy’s strategy for AI. In March 2018, the Italian government published a report, edited by the Task Force, addressing various methods of adopting AI technology into public policies. This report, referred to as the White Paper, identified nine challenges to be addressed in the country’s National AI Strategy:

  • Ethics – agencies should consider the effects that AI innovation has had, and will continue to have, in terms of societal impact and safeguarding values.
  • Technology – AI in the public sector should be personalised and adaptive to create services capable of catering to the needs of citizens.
  • Skills – individuals should be trained on AI issues for both work-related and educational benefits.
  • Role of data – AI needs to be able to transform public data into widespread and shared knowledge in a transparent and accessible way.
  • Legal context – legal liability for AI should be established, and the characteristics of AI solutions and systems should be defined and interpreted in accordance with the fundamental rights of individuals and the laws in effect.
  • Accompanying the transformation – there needs to be room for both ordinary citizens and public administrations to participate in the development and creation of AI systems.
  • Preventing inequalities – AI can reduce inequalities in different public law sectors, but one must also be mindful of the inequalities AI can create based on, for example, race, gender, and other social factors.
  • Measuring the impact – the impact of AI should be measured through both qualitative and quantitative indicators, for example, exploring which professions and roles will be replaced by technology.
  • The human being – there needs to be consideration of the real-life effects of AI on human beings in terms of their rights, freedoms, and opportunities.

To address these challenges, the report makes 10 recommendations:

  1. Promote a national platform dedicated to AI development, including capabilities to collect annotated data, code, and learning modules
  2. Make public appropriate documentation on the AI systems operated by public administrations so that processes can be reproduced, evaluated, and verified
  3. Enable computational linguistic systems for the Italian language using new resources distributed with open licenses
  4. Develop adaptive personalisation and recommender systems to facilitate interaction with the services offered by public administration systems, based on the specific needs, requirements, and characteristics of citizens
  5. Promote the creation of a National Competence Centre to act as a point of reference for the implementation of AI in public administration, enhancing the positive effects of AI systems and reducing their risks
  6. Facilitate the dissemination of skills by promoting training, education, and certification
  7. Provide a plan to encourage investment in AI in public administration through promoting innovation
  8. Support collaboration between research, business accelerators, and innovation hubs to promote the adoption of AI solutions in the public sector
  9. Establish a transdisciplinary Centre on AI, in collaboration with the Centre of Expertise, to publicise debates on AI ethics and create opportunities for expert and citizen consultation
  10. Define guidelines and processes based on the principle of security-by-design and facilitate the sharing of data on cyber-attacks on AI across Europe

Get compliant

Governments and businesses alike will soon be faced with several requirements and principles that they must follow when designing, developing, deploying, and procuring AI systems. Taking action early is the best way to ensure compliance. To find out more about how Holistic AI can help you with this, get in touch at we@holisticai.com.

Written by Airlie Hilliard, Senior Researcher at Holistic AI, and Imani Wilson, Legal Research Intern at Holistic AI

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
