From the Golden State to the heart of Europe, artificial intelligence (AI) regulation is gathering momentum. In recent years, increasing awareness of AI risks and harms has prompted governments to consider regulations, policies, and strategies to manage them. As a result, a growing consensus is emerging in favour of risk-based governance of AI, centred on assessing AI risks and enabling stakeholders to respond with practical and proportionate control measures.
In the EU, the proposed AI Act aims to position the region as the global leader in AI regulation, establishing the “gold standard” for protecting society and managing risks, following the precedent set by the GDPR. Narrower in focus, California has proposed amendments to its employment regulations to extend non-discrimination requirements to automated-decision systems (ADS) to address bias and discrimination in hiring and employment practices. Similarly, California’s Workplace Technology Accountability Act seeks to regulate the day-to-day use of automated tools in the workplace. This blog post compares California’s proposed regulations to the EU’s AI Act.
On 21 April 2021, the European Commission proposed harmonised rules on Artificial Intelligence, the first proposed law worldwide to regulate the development and use of AI. The EU AI Act takes a risk-based approach to AI risk management, classifying systems into four categories.
Systems with minimal risk, including spam filters or AI-enabled video games, comprise the majority of systems currently on the market and carry no obligations. Systems with limited risk are those that i) interact with humans, ii) detect humans or determine a person’s categorisation based on biometric data, or iii) produce manipulated content. These systems include chatbots and those used to produce deep fakes, and they are subject to transparency requirements.
High-risk systems are those that can have a significant impact on the life chances of a user, and have the most obligations. The most recent compromise text outlines eight types of systems that fall into this category:
i) biometric identification systems used for real-time and ‘post’ remote identification of people without their agreement
ii) systems for critical infrastructure and protection of the environment, including those used to manage pollution
iii) education and vocational training systems used to evaluate or influence the learning process of individuals
iv) systems influencing employment, talent management and access to self-employment
v) systems affecting access and use of private and public services and benefits, including those used in insurance
vi) systems used in law enforcement
vii) systems to manage migration, asylum and border control, including systems used on behalf of the relevant public authority
viii) systems used in the administration of justice and democratic processes, including systems used on behalf of the judicial authority.
Systems with unacceptable levels of risk are prohibited from being made available on the EU market. These include systems that deploy subliminal techniques or exploit the vulnerabilities of specific groups, systems used for social scoring, and real-time biometric identification systems used by law enforcement in public places.
Under the Act, providers of AI systems established in the EU must comply with the regulation, along with those in third countries that place AI systems on the market in the EU and those in the EU that use AI systems. Additionally, the Act will apply to providers and users based in third countries if the system’s output is used within the EU.
The risk management obligations placed on users of AI systems depend on the risk level of the system:
Before high-risk systems can be placed on the EU market, they must undergo conformity assessments to demonstrate that they meet the legal requirements. Users will also have to establish AI risk management processes. Systems that comply with the Act and pass the conformity assessment must bear the CE marking and be registered in an EU database before they can be placed on the market. Following any significant change to the system, such as the model being retrained on new data or features being removed from the model, the system must undergo a further conformity assessment to ensure that the requirements are still being met before it can be re-certified and re-registered in the database. Failure to comply could cost companies an estimated €200,000 - €400,000.
A history of strict data privacy and protection laws is at the core of AI regulation in California. Pending updates to state employment law would regulate the use of algorithms and their capacity to discriminate against protected groups. Under the proposed amendments, any employer or third-party vendor that buys, sells, uses, or administers automated-decision systems (ADS) or employment screening tools that automate decision-making is subject to the legislation and is prohibited from using ADS that discriminate based on protected characteristics. The categories protected under the legislation include race, national origin, gender, accent, English proficiency, immigration status, driver’s license status, citizenship, height or weight, sex, pregnancy or perceived pregnancy, religion, and age, unless a characteristic is shown to be job-related for the position in question and consistent with business necessity.
Meanwhile, the proposed Workplace Technology Accountability Act mandates specific risk management requirements: algorithmic impact assessments and data protection impact assessments are required for automated decision tools and worker information systems to identify risks such as discrimination or bias, errors, and violations of legal rights.
In addition, under the Act, workers have the right to request and correct any information an employer collects, stores, analyses or interprets about them. The legislation also prohibits employers from processing data about employees unless the data are strictly necessary for an essential job function.
Both of California’s proposed laws focus narrowly on automated employment decision tools used in recruiting, hiring, promotion and work monitoring. While this movement to regulate AI systems used in hiring and employment-related decisions has gained traction, the EU AI Act is far more expansive, taking a sector-agnostic approach and banning certain unacceptable technologies, such as social scoring.
Separately, the European Commission has taken the opportunity to require conformity assessments for high-risk systems. This approach departs from other national strategies by introducing a mandatory CE-marking procedure with a layered approach to enforcement. Like the conformity assessments required by the EU AI Act, the Workplace Technology Accountability Act requires data protection impact assessments of worker information systems and algorithmic impact assessments of automated decision systems, which can help ensure compliance with the legislation’s requirements and inform risk management strategies.
Both Acts also require ongoing monitoring and re-evaluation when significant changes are made to a system. However, a critical difference between the assessments required by the two acts is that nothing in the EU AI Act specifies that conformity assessments must be carried out by third parties. In contrast, Californian impact assessments must be carried out by a third party with the relevant experience and expertise.
Similarly, California and the EU impose strict notification obligations, keeping employee rights front of mind. For example, under the EU requirements, people must be notified when they encounter biometric recognition systems or AI applications that claim to be able to read their emotions. Taking a slight departure but aligned nonetheless, California compels employers to notify workers when electronic monitoring via automated systems occurs in the workplace, which is permitted only where necessary for the job.
AI’s fast-evolving and dynamic nature necessitates a forward-looking approach to anticipate and prepare for an uncertain future. Given AI’s rapid development and growing range of applications, AI risks are constantly changing, further complicating AI risk assessment and mitigation efforts. Left unchecked, biased or inaccurate algorithmic decision-making can perpetuate existing structures of inequality and lead to discrimination, causing severe ethical, legal, and reputational harm.
While California has taken a less prescriptive approach by narrowly focusing its AI regulation on the context of employment law, the EU AI Act is set to lead the way in responsible AI governance with its extra-territorial scope. Holistic AI’s risk management platform can help enterprises catalogue their AI systems, identify risks, and recommend steps to mitigate them. The platform operates across five risk verticals, which cover all of the obligations for high-risk systems under the EU AI Act.
Once the conformity assessment has been completed on the Holistic AI platform, we provide compliance certification. Additionally, our platform can continually monitor a system after deployment and re-issue certification following any significant changes to the system.
To find out more, get in touch with a member of our team at we@holisticai.com.
Written by Ayesha Gulley, Public Policy Associate at Holistic AI & Airlie Hilliard, Senior Researcher at Holistic AI.