On 6 September 2023, California Governor Gavin Newsom issued an executive order on artificial intelligence (AI), laying out a strategic plan for how the state will approach the progress and proliferation of generative AI.
A trailblazer in the generative AI space, from research, development and innovation to human capital and entrepreneurship, California intends to continue leading and promoting the responsible design, development, integration and management of emerging technologies.
The order highlights the need to balance the power of generative AI technologies to enhance human potential with the risks such technologies pose, such as bias, bioterrorism, cyberattacks, and disinformation, among other malicious uses.
A call for united governance is made in the executive order, and a timeline of requirements from now until at least January 2025 is also outlined.
Examination of use cases
Within 60 days of order issuance, multiple state agencies, departments and their workforces must produce a report determining the “most significant, potentially beneficial use cases” for the implementation and integration of generative AI tools by the state.
Potential risks across society must also be addressed, with a focus on high-risk use cases in which decisions critical to access to essential goods and services are made.
The report – which should be authored in consultation with industry experts, state government, academia and civil society – must also be assessed and updated where appropriate.
- No later than March 2024, multiple state agencies dedicated to cybersecurity and threat assessment are expected to complete risk analysis detailing “potential threats and vulnerabilities of California’s critical energy infrastructure” related to generative AI.
- Results will be presented in a classified briefing, and unclassified information will be opened to the public for recommendations on future actions.
- Recommendations shall address how to ensure effective human control.
Requirements for state agencies
To encourage a safe and responsible innovation ecosystem, the executive order outlines the following requirements for multiple state government agencies and departments, most to be fulfilled in consultation with industry experts, academia, state government workforce and civil society (including historically vulnerable and marginalised communities):
- Within 60 days of order issuance: conduct and submit an inventory of all current high-risk uses of generative AI to the California Department of Technology.
- By January 2024: building on the White House’s Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework, issue general guidelines for public-sector procurement and use of generative AI, along with the training required for its use.
- By March 2024: establish the infrastructure to conduct pilots on generative AI projects, including “sandboxes” to test such projects.
- By July 2024: develop guidelines for state agencies and departments to analyse the impact that the adoption of generative AI tools may have on vulnerable communities.
- Also by July 2024: consider pilot projects of generative AI applications and measure how generative AI can improve Californians' experience with and access to government services, including how the technology can support state employees to complete their duties.
- No later than July 2024: make available appropriate training for state government workers’ use of state-approved generative AI tools to achieve equitable outcomes.
- 2024 Summit: the University of California, Berkeley’s College of Computing, Data Science, and Society and Stanford University’s Institute for Human-Centered AI must jointly develop and host a California-specific symposium to engage in meaningful discussions on generative AI and its impacts on Californians’ lives.
- By January 2025: update the state’s project approval, procurement and contract terms.
- No later than 1 January 2025: establish criteria to evaluate the impact of generative AI on the state government workforce, as well as provide guidelines on how state agencies and departments can support state employees’ effective use of such tools.
New AI legislation is emerging across the United States – be prepared with Holistic AI
Across the United States, AI regulation is evolving and expanding, with no sign of slowing down; if anything, the pace is expected to accelerate. This shift has ushered in, and will continue to usher in, a new wave of compliance obligations for organisations using AI in their business.
Holistic AI are governance, risk, and compliance experts – and we can help your organisation adapt to the changing legislative landscape.
To learn more about us, our innovative platform and suite of solutions, schedule a call with one of our specialists.
Authored by Monica Lopez, Senior Policy Expert at Holistic AI.