What is the EU AI Act?

January 5, 2023

First proposed on 21 April 2021, the European Commission’s Harmonised Rules on Artificial Intelligence, colloquially known as the EU AI Act, seeks to lead the world in AI regulation. Likely to become the global gold standard for AI regulation, much as the General Data Protection Regulation (GDPR) did for privacy, the rules aim to create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI.

Since the Act was first proposed, an extensive consultation process has produced a number of proposed amendments in the form of compromise texts. Among those shaping the regulation are the European Council and the French Presidency, both of which have published compromise texts. The latest and likely final version came from the Czech Presidency, whose General Approach text was published on 25 November 2022.

Why has the EU AI Act been proposed?

While AI and the automation it enables can offer many benefits, such as increased efficiency and accuracy, AI also poses novel technical, legal, and ethical risks. Scandals have already affected several high-risk applications of AI:

  • Amazon’s resume screening tool was retired before deployment because it was biased against female candidates, penalising resumes that included the word “women’s” (e.g. “women’s college”).
  • Northpointe’s tool COMPAS, designed to predict recidivism (the likelihood of a criminal reoffending), was also found to be biased. The tool overestimated the recidivism risk of Black defendants, who were twice as likely as white defendants to be misclassified as being at higher risk of violent recidivism.
  • The algorithms used to determine car insurance premiums in the US gave residents of predominantly minority areas higher quotes than those living in non-minority areas with similar levels of risk.

While existing laws, such as the GDPR, do apply to AI to some extent, they alone are not sufficient to prevent AI from causing harm.

Who has to comply with the EU AI Act?

Under the Act, providers of AI systems established in the EU must comply with the regulation, along with providers in third countries that place AI systems on the EU market and users of AI systems located in the EU. It also applies to providers and users based in third countries if the output of the system is used within the EU. Exempt from the regulation are those who use AI systems for military purposes and public authorities in third countries.

A risk-based approach

The regulation uses a risk-based approach to determine both the obligations and penalties for different types of systems. Accordingly, systems are classified as posing minimal or low risk, limited risk, high risk, or unacceptable risk (a rough sketch of the tiers follows the list below).

• Low-risk systems include spam filters and AI-enabled video games, and comprise the majority of the systems currently on the market. These systems have no obligations under the rules in their current form.

• Systems with limited risk are those that i) interact with humans, ii) detect humans or categorise people based on biometric data, or iii) produce manipulated content. These include chatbots and systems used to produce deep fakes.

  • The obligations for these systems relate to transparency: users must be informed that they are interacting with an AI system, that an AI system will be used to infer their characteristics or emotions, or that the content they are viewing has been generated by AI.

• High-risk systems are ones that can have a significant impact on the life chances of a user. These systems have stringent requirements that must be followed before they can be deployed on the EU market.

• Systems with unacceptable risk are banned from sale on the EU market and include those that:

  • Manipulate behaviour in a way that may result in physical or psychological harm
  • Exploit the vulnerabilities of a group based on their age, physical or mental disability, or socioeconomic status
  • Are used for social scoring by governments
  • Are used for real-time biometric monitoring in public areas by law enforcement or on their behalf, except where strict criteria are met

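To make the tiered structure concrete, the sketch below expresses the four tiers and their headline obligations as a small Python lookup. The tiers and examples come from the Act as summarised above; the enum, mapping, and function names are purely illustrative assumptions, not anything defined in the regulation, which classifies systems through detailed criteria and annexes rather than a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal or low risk"       # e.g. spam filters, AI-enabled video games
    LIMITED = "limited risk"              # e.g. chatbots, deep-fake generation
    HIGH = "high risk"                    # e.g. CV screening, credit scoring
    UNACCEPTABLE = "unacceptable risk"    # e.g. social scoring by governments

# Illustrative mapping only: hypothetical use-case labels, not the Act's own taxonomy.
EXAMPLE_USE_CASES = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_service_chatbot": RiskTier.LIMITED,
    "cv_screening_tool": RiskTier.HIGH,
    "government_social_scoring": RiskTier.UNACCEPTABLE,
}

def headline_obligations(tier: RiskTier) -> str:
    """Rough summary of the obligations attached to each tier."""
    return {
        RiskTier.MINIMAL: "no obligations under the current text",
        RiskTier.LIMITED: "transparency: users must be told AI is involved",
        RiskTier.HIGH: "conformity assessment, documentation, human oversight",
        RiskTier.UNACCEPTABLE: "prohibited from the EU market",
    }[tier]

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} -> {headline_obligations(tier)}")
```
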
However, the classification of insurance systems as high-risk has proved contentious. While the original proposal did not include these systems in the initial list of high-risk use cases, the Slovenian Presidency's compromise text added systems used for insurance premium setting, underwriting, and claims assessment as an additional use case under the public and private services section. Taking a more targeted approach, the Czech Presidency's fourth compromise text proposes that only systems used for these purposes in health and life insurance be considered high-risk.

Obligations for high-risk systems

Eight use cases fall into this category:

  1. Biometric identification systems used for real-time and ‘post’ remote identification of people without their agreement
  2. Systems for critical infrastructure and protection of the environment, including those used to manage pollution
  3. Education and vocational training systems used to evaluate or influence the learning process of individuals
  4. Systems influencing employment, talent management, and access to self-employment
  5. Systems affecting access to and use of private and public services and benefits, including those used in insurance under the Council presidency compromise texts
  6. Systems used in law enforcement, including systems used on behalf of law enforcement
  7. Systems used to manage migration, asylum, and border control, including systems used on behalf of the relevant public authority
  8. Systems used in the administration of justice and democratic processes, including systems used on behalf of the judicial authority

These systems are subject to more stringent requirements than any other category, including the use of high-quality data, appropriate documentation practices, transparency, adequate human oversight, and testing for accuracy and robustness. In addition, identifications made by biometric systems cannot be used to inform actions or decisions unless the identification has been verified by at least two people.

Following any major change to the system, such as the model being retrained on new data or features being removed, the system must undergo additional conformity assessments to ensure that the requirements are still met, before being re-certified and registered in the database.
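As a purely hypothetical illustration of how a provider might enforce this in a release pipeline, the sketch below blocks deployment of a substantially modified model until it has been re-certified. The trigger conditions mirror the examples above (retraining on new data, changes to the feature set); the class and function names are our own assumptions for illustration, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    """Minimal record of a high-risk system release (illustrative only)."""
    version: str
    training_data_hash: str   # fingerprint of the training set
    features: frozenset       # names of input features
    certified: bool = False   # has this exact version passed conformity assessment?

def requires_reassessment(current: ModelVersion, candidate: ModelVersion) -> bool:
    """Flag the substantial modifications named above: retraining on new
    data or removing (or otherwise changing) model features."""
    retrained = candidate.training_data_hash != current.training_data_hash
    features_changed = candidate.features != current.features
    return retrained or features_changed

def deploy(current: ModelVersion, candidate: ModelVersion) -> ModelVersion:
    # Hypothetical deployment gate: block release until the new version
    # has been re-certified and registered.
    if requires_reassessment(current, candidate) and not candidate.certified:
        raise RuntimeError(
            f"Version {candidate.version} is a substantial modification: "
            "re-run the conformity assessment before deploying."
        )
    return candidate
```
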

A global gold standard

The EU AI Act’s sector-agnostic approach will help ensure consistent standards for regulating AI across the board. The rules impose obligations proportionate to the risk of the system, ensuring that potentially harmful systems are not deployed in the EU while those posing little or no risk can be used freely. Systems that pose a high risk will be constrained accordingly, without foreclosing opportunities for innovation and development.

Penalties for non-compliance with the Act

It is imperative that organisations that may fall under the EU AI Act are aware of their obligations, as non-compliance carries heavy penalties of up to €30 million or 6% of global turnover, whichever is higher. The severity of the fine depends on the level of the transgression, ranging from using prohibited systems at the high end to supplying incorrect, incomplete, or misleading information at the low end. For further information about the penalties associated with the Act, you can read our Penalties of the EU AI Act blog post.
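Since the cap is the higher of a fixed amount and a share of turnover, the arithmetic is simple: for a firm with €1 billion in annual global turnover, 6% is €60 million, so the percentage-based figure applies. A minimal sketch (the function name is ours, for illustration):

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious infringements:
    €30 million or 6% of global annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * annual_global_turnover_eur)

# For €1bn turnover, 6% (= €60m) exceeds the €30m floor.
assert max_fine_eur(1_000_000_000) == 60_000_000
# For a smaller firm with €100m turnover, the €30m floor applies.
assert max_fine_eur(100_000_000) == 30_000_000
```
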

While the rules will only be enforced in the EU, they are likely to be emulated globally and have been termed the ‘GDPR for AI’. Regulation like this will soon mean that AI worldwide is deployed more safely and with greater accountability.

Get in touch with us at we@holisticai.com to find out more about how we can help you prepare for this and other upcoming regulations.

Written by Airlie Hilliard, Senior Researcher at Holistic AI. Follow her on LinkedIn.
