The bill text outlines the following topics to be covered in the program:
Outside of the bill text itself, Senator Peters explained why the training program is needed, saying it will be instrumental in “training our federal workforce to better understand this technology and ensure that it is used ethically and in a way that is consistent with our nation's values,” particularly with respect to privacy and discrimination.
To build this training program, the bill encourages the Director of the Office of Management and Budget (OMB) to consult with technologists, scholars, and other private and public sector experts. The bill is subject to a 10-year sunset clause, and within those 10 years the training program is expected to be updated at least every 2 years.
The AI Training Act is not the first national initiative aimed at guiding government agency use of AI.
Executive Order (EO) 13960, Promoting the Use of Trustworthy AI in the Federal Government, was signed in December 2020. The order directed that principles be developed to guide federal use of AI across agencies, outside of national security and defense.
These principles require that federal AI use be consistent with American values and applicable laws. The EO also requires that agencies publish an inventory of non-classified and non-sensitive current and planned AI use cases.
In 2023, NIST will re-evaluate and assess any AI that has been deployed or is in use by federal agencies to ensure consistency with the policies outlined in the order. The US Department of Health and Human Services has already created its inventory of use cases in preparation for NIST’s evaluation, and its inventory list can be found here.
The White House also recently published a Blueprint for an AI Bill of Rights to guide the design, development, and deployment of AI systems. The Blueprint is nonbinding and relies on designers, developers, and deployers to voluntarily apply the framework to protect Americans from the harms that can result from the use of AI.
The US is taking decisive action to manage the risks of artificial intelligence at the federal, state, and local agency levels. Taking steps to address the risks of AI early is the best way to get ahead of these upcoming regulations.
Holistic AI has pioneered the field of AI Risk Management and empowers enterprises to adopt and scale AI confidently. Our team has the technical expertise needed to identify and mitigate risks, and our policy experts track proposed regulations and use that knowledge to inform our product. Get in touch with a team member or schedule a demo to find out how we can help you comply with these legislative requirements.
Written by Ashyana-Jasmine Kachra, Public Policy Intern at Holistic AI. Follow her on LinkedIn.