California’s Privacy Protection Agency Releases Draft Rules for Automated Decision Technologies

December 1, 2023
Authored by
Siddhant Chatterjee
Public Policy Strategist at Holistic AI

On 27 November 2023, the California Privacy Protection Agency (CPPA) issued draft regulations on the use of Automated Decision-making Technologies (ADTs). Once adopted, these rules will add to the growing body of regulations under the California Consumer Privacy Act (CCPA) that seek to strengthen accountability, explainability, and transparency in automated decision-making.

While the Agency has not yet initiated the formal rulemaking process for ADTs, the current draft has been published for public consultation and remains subject to change. The draft rules will be presented to the CPPA’s Board for discussion and deliberation on 8 December, after which the formal rulemaking process will commence.

Defining Automated Decision Technologies

Under these rules, an ADT has been defined as any “system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision-making.”

The definition also extends to algorithms used for profiling individuals. Here, profiling refers to the processing of personal information to evaluate aspects of an individual, including their performance at work, economic situation, health, preferences, interests, reliability, behaviour, and location and movements.

Enforcing consumer rights and transparency

The draft rules centre on three overarching elements:

  1. Right to Access Information: Consumers have the right to ask businesses what automated decision-making technologies are used and how decisions affecting them were made. When such a request is made, businesses must provide comprehensive information on the system’s underlying logic, the possible range of outcomes, and the extent to which human decision-making played a role in the automated decision. Crucially, disclosure is contingent on verifying the identity of the requesting consumer; requests lacking identity verification may be rejected.
  2. Pre-Use Notifications: The proposed rules require businesses that use personal information in ADTs to disclose that use to consumers, along with their right to opt out. Where the initial notice is insufficient, additional information must be made available through a hyperlink offering a detailed explanation of how the information is used in the business’s systems.
  3. Opt-out Modalities for Consumers: The rules also set out the circumstances in which consumers can opt out of ADTs, including performance evaluations in education and the workplace, job applications, legal determinations, and the use of technologies such as facial recognition and biometric profiling in public places.

This development strongly signals California’s intention to pioneer and lead the regulatory discourse on Artificial Intelligence at the state level, having proposed multiple initiatives over the course of this year alone. These include draft legislation such as Assembly Bill 331, which seeks to prohibit the use of automated decision tools that result in algorithmic discrimination; Assembly Bill 302, which seeks to establish dedicated regulatory oversight of ADTs; Senate Bill 313, which seeks to regulate the use of AI by state agencies; and the recent Executive Order by Governor Newsom, which lays out a strategic plan for how California will approach the progress and proliferation of generative AI.

Regulation is coming: get ready

California is by no means the only US state taking decisive action to make AI safer and fairer. While much of this activity has targeted HR Tech, other areas such as insurance, online safety, and generative AI are also receiving significant attention, and those developing and deploying AI systems will soon face stringent requirements. Acting early is the best way to prepare and remain compliant. Schedule a call with our experts to find out more about how Holistic AI can help you navigate both existing and upcoming regulations.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
