An Overview of the US AI Training Act 2022

December 19, 2022
Authored by
Ashyana-Jasmine Kachra
Policy Associate at Holistic AI

Key takeaways

  • The Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act) was signed into law by President Biden in October 2022.
  • The AI Training Act is premised on education and training to inform procurement and facilitate the adoption of AI for services at the federal level.

Training & education to inform procurement and facilitate the adoption of AI at the federal level

  • Signed into law by President Biden, the AI Training Act takes a risk management approach towards federal agency procurement of AI.
  • The Act aims to put best practices in place to educate those tasked with procurement, logistics, project management, and similar functions about AI, its uses, its risks, and other key considerations.
  • The goal is for AI to be purchased and procured from an educated and informed perspective, and for federal agencies to explore opportunities to use and implement AI.
  • This bill requires the Office of Management and Budget (OMB) to either create or provide an artificial intelligence (AI) training program to aid in the informed acquisition of AI by federal executive agencies.
  • The main purpose of the training program is to ensure that those responsible for procuring AI within a covered workforce are aware of both the capabilities and risks associated with AI and similar technologies.

US AI Training Act bill text

The bill text outlines the following topics to be covered in the program:

  1. The science underlying AI, including how AI works.
  2. Introductory concepts relating to the technological features of artificial intelligence systems.
  3. The ways in which AI can benefit the Federal Government.
  4. The risks posed by AI, including discrimination and risks to privacy.
  5. Ways to mitigate the risks of AI, including efforts to create and identify AI that is reliable, safe, and trustworthy.
  6. Future trends in AI, including trends for homeland and national security and innovation.

Outside of the bill text itself, Senator Peters explained that the training program is needed and will be instrumental in “training our federal workforce to better understand this technology and ensure that it is used ethically and in a way that is consistent with our nation's values”, particularly on the verticals of privacy and discrimination.

To build this training program, the bill encourages the Director of the OMB to consult with technologists, scholars, and other private and public sector experts. The Act is subject to a 10-year sunset clause; within those 10 years, the training program is expected to be updated at least every 2 years.

Continuing a national commitment to trustworthy AI

The AI Training Act is not the first national initiative aimed at guiding government agency use of AI.

Executive Order (EO) 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, was signed in December 2020. The order set out principles to guide the federal use of AI within different agencies, outside of national security and defense.

These principles require that agency use of AI be in line with American values and applicable laws. The EO also requires agencies to make public an inventory of their non-classified and non-sensitive current and planned AI use cases.

In 2023, NIST will re-evaluate and assess any AI that has been deployed or is in use by federal agencies to ensure consistency with the policies outlined in the order. The US Department of Health and Human Services has already created its inventory of use cases in preparation for NIST’s evaluation, and its inventory list can be found here.

The White House also recently published a Blueprint for an AI Bill of Rights to guide the design, development, and deployment of AI systems. The Blueprint is nonbinding and relies on designers, developers, and deployers to voluntarily apply the framework to protect Americans from the harms that can result from the use of AI.

The US is taking decisive action to manage the risks of artificial intelligence at the federal, state, and local agency levels. Taking steps to address the risks of AI early is the best way to get ahead of these upcoming regulations.

Holistic AI has pioneered the field of AI Risk Management and empowers enterprises to adopt and scale AI confidently. Our team has the technical expertise needed to identify and mitigate risks, and our policy experts track and act on proposed regulations to inform our product. Get in touch with a team member or schedule a demo to find out how we can help you comply with these legislative requirements.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
