What is the EU AI Act?

September 12, 2023
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI
Ayesha Gulley
Senior Policy Associate at Holistic AI

First proposed on 21 April 2021, the European Commission’s Harmonised Rules on Artificial Intelligence (EU AI Act) seek to lead the world in AI regulation and establish a global standard for protecting users of AI systems from preventable harm. Specifically, the rules aim to create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI.

What is the EU's Artificial Intelligence Act?

The EU AI Act is a significant piece of legislation that could have a major impact on the development and use of AI in the European Union. The Act is still making its way through the EU legislative process, having only recently been approved by the European Parliament, but it is clear that it has the potential to shape the future of AI in Europe.

Since the Act was first proposed, an extensive consultation process has resulted in a number of amendments in the form of compromise texts, with a draft general approach adopted on 6 December 2022. This text was then debated and revised ahead of a European Parliament vote on 26 April 2023; a political agreement was reached on 27 April, followed by a key committee vote on 11 May 2023, where the leading parliamentary committees accepted the adopted version of the text by majority vote. The European Parliament then voted on the amended text on 14 June 2023, accepting it by a large majority and paving the way for Trilogues to commence. The first sweeping legislation of its kind, the Act will have implications for countless AI systems used in the EU.

Why has the EU AI Act been proposed?

While AI and its associated automation can offer many benefits, such as increased efficiency and accuracy, using AI also poses novel technical, legal, and ethical risks. Indeed, scandals have affected multiple high-risk applications of AI:

  • Amazon’s resume screening tool was retired before being deployed after it was found to be biased against female candidates, penalising resumes that included the word “women’s” (e.g. “women’s college”).
  • Northpointe’s COMPAS tool, designed to predict recidivism (the likelihood of a criminal reoffending), was also found to be biased: it overestimated the recidivism risk of black defendants, who were almost twice as likely as white defendants to be misclassified as high-risk for violent crime.
  • The algorithms used to determine car insurance premiums in the US gave residents of predominantly minority areas higher quotes than those living in non-minority areas with similar levels of risk.  

While existing laws, such as the GDPR, do apply to AI, they alone are not sufficient to prevent AI from causing harm due to the novel risks it can pose.

When will the AI Act come into effect?

The Act is forecast to come into effect in 2026, after the likely two-year implementation period that will follow the conclusion of the Trilogue stage, in which three institutions – the European Parliament, the Council of the European Union, and the European Commission – align their respective positions on the AI Act. Trilogues began on 14 June 2023 and are expected to produce a final text by the end of the year, ahead of the 2024 European Parliament elections.

This enforcement date is dependent on several stages of the EU legislative process, but significant progress has already been made. On 11 May 2023, the Civil Liberties and Internal Market committees of the European Parliament overwhelmingly approved proposed changes to the EU AI Act, with 84 votes in favour, seven against, and 12 abstentions. Similarly, the 14 June vote saw an overwhelming majority vote in favour of the Act, with 499 votes in favour, 28 against and 93 abstentions.


Who has to comply with the EU AI Act?

Under the Act, providers of AI systems established in the EU must comply with the regulation, as must providers in third countries that place AI systems on the EU market and deployers of AI systems located in the EU. It also applies to providers and deployers (formerly users) based in third countries if the system's output is used within the EU.

AI systems used in research, testing, and development activities before they are placed on the market or put into service are exempt, provided these activities are conducted with respect for fundamental rights and other applicable laws and do not involve testing in real-world conditions. Further, the regulation does not apply to public authorities of third countries and international organisations working within the framework of international agreements, or to AI systems developed or used exclusively for military purposes. In addition, AI components provided under free and open-source licences are excluded, with the exception of foundation models.

What is the EU AI Act’s risk-based approach?

The EU AI Act outlines a risk-based approach, where the obligations for an AI system are proportionate to the level of risk it presents, taking into account factors such as the design and intended use of the system. Based on the risk level, the Act specifies corresponding requirements for documentation, auditing, and transparency. The Act establishes four distinct levels of risk, defined as follows (a minimal illustrative sketch follows the list):

  • Low-risk systems: Include spam filters or AI-enabled video games and comprise the majority of the systems currently being used on the market. These systems do not have any obligations under the rules in their current form but must comply with existing legislation.
  • Systems with limited risk: The obligations for these systems are related to transparency, where users must be informed that they are interacting with an AI system, that an AI system will be used to infer their characteristics or emotions, or that the content they are interacting with has been generated using AI. Examples are chatbots and deepfakes.
  • High-risk systems: Systems that can have a significant impact on the life chances of a user. These systems have stringent requirements to be followed before being deployed on the EU market, including risk management and data governance obligations.
  • Systems with unacceptable risk: Are banned from sale on the EU market and include those that manipulate individuals without their consent or enable social scoring, as well as real-time and post remote biometric identification systems.
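
To make the tiering concrete, below is a minimal illustrative sketch in Python. The tier names and obligation summaries are paraphrased from this article’s descriptions, not taken from the legal text, so treat it as a reading aid rather than a compliance tool:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"                    # e.g. spam filters, AI-enabled video games
    LIMITED = "limited"            # e.g. chatbots, deepfakes
    HIGH = "high"                  # e.g. systems affecting life chances
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring; banned outright

# Obligation summaries paraphrased from the list above (hypothetical labels).
OBLIGATIONS = {
    RiskTier.LOW: ["comply with existing legislation"],
    RiskTier.LIMITED: ["inform users they are interacting with, or seeing content generated by, an AI system"],
    RiskTier.HIGH: ["risk management", "data governance", "conformity assessment before market entry"],
    RiskTier.UNACCEPTABLE: ["prohibited: cannot be placed on the EU market"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return this article's summary of obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))
```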

What are the high-risk systems under the AI Act?

According to Article 6, a system is considered high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II and is required to undergo a third-party conformity assessment related to health and safety risks.

Additionally, eight high-risk use cases are listed in Annex III:

  • Biometric and biometrics-based systems used for biometric identification or to make inferences about personal characteristics, including emotion recognition systems
  • Systems for critical infrastructure and protection of the environment, including those used to manage pollution
  • Education and vocational training systems used to evaluate or influence the learning process of individuals
  • Systems influencing employment, talent management and access to self-employment
  • Systems affecting access to and use of private and public services and benefits, including those used in insurance under the Council presidency’s compromise text
  • Systems used in law enforcement, including systems used on behalf of law enforcement
  • Systems to manage migration, asylum and border control, including systems used on behalf of the relevant public authority
  • Systems used in the administration of justice and democratic processes, including systems used on behalf of the judicial authority

Such systems are considered high-risk only if they pose a significant risk of harm to the health, safety, or fundamental rights of individuals. While there is currently a lack of clarity on how to determine whether this threshold is met, six months prior to the regulation coming into force, the European Commission will consult with the AI Office and relevant stakeholders to provide clear guidelines specifying the circumstances in which outputs from these systems would pose a significant risk to the health, safety, or fundamental rights of natural persons. In addition, systems used for critical infrastructure are considered high-risk if they pose a significant risk of harm to the environment.
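
Read together, Article 6 and Annex III describe a two-route test. The sketch below reduces it to boolean logic; every parameter name is hypothetical and the legal test is considerably more nuanced, so this is a reading aid only:

```python
def is_high_risk(
    is_safety_component_of_annex_ii_product: bool,   # Article 6 route
    requires_third_party_conformity_assessment: bool,
    is_annex_iii_use_case: bool,                     # Annex III route
    significant_risk_to_health_safety_or_rights: bool,
    is_critical_infrastructure: bool = False,
    significant_risk_to_environment: bool = False,
) -> bool:
    # Route 1: a safety component of (or a product under) Annex II Union
    # harmonisation legislation requiring third-party conformity assessment.
    if is_safety_component_of_annex_ii_product and requires_third_party_conformity_assessment:
        return True
    # Route 2: an Annex III use case posing a significant risk to health,
    # safety, or fundamental rights; for critical infrastructure, a
    # significant risk to the environment also qualifies.
    if is_annex_iii_use_case and (
        significant_risk_to_health_safety_or_rights
        or (is_critical_infrastructure and significant_risk_to_environment)
    ):
        return True
    return False
```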

The most recent text also added that AI systems used to influence voters in political campaigns, and recommender systems used by social media platforms with more than 45 million users under the Digital Services Act, could be considered high-risk, categorised as systems used in the administration of justice and democratic processes.

In a recent addition, the Act now allows providers of high-risk systems to notify the relevant supervisory authorities if they do not deem their system to pose significant risks – defined in Article 3 as a risk that, as a result of its combined severity, intensity, probability, and duration, could significantly affect an individual or group. Upon receiving the notification, the authority has three months to review and object if it considers the system to pose a significant risk.

What are the obligations for high-risk systems?

High-risk systems are subject to more stringent requirements than any other category. Although obligations can vary by the type of entity associated with the system, in general, there are seven requirements for high-risk systems:

  • A continuous and iterative risk management system must be established throughout the entire lifecycle of the system (Article 9)
  • Data governance practices should be established to ensure the data for the training, validation, and testing of systems are appropriate for the system’s intended purpose (Article 10)
  • Technical documentation should be drawn up before the system is put onto the market (Article 11)
  • Record-keeping should be facilitated by ensuring the system is capable of automatic recording of events (Article 12)
  • Systems should be developed in a way to allow the appropriate transparency and provision of information to users (Article 13)
  • Systems should be designed to allow appropriate human oversight to prevent or minimise risks to health, safety, or fundamental rights (Article 14)
  • There should be an appropriate level of accuracy, robustness and cybersecurity maintained throughout the lifecycle of the system (Article 15)

For providers of foundation models specifically, a description of the data sources used in development is also required. Additionally, identifications made by biometric systems cannot be used to inform actions or decisions unless the identification has been verified by at least two people with the necessary competence, training, and authority.

To ensure compliance with the relevant obligations, conformity assessments must be carried out. The system must then be registered in the EU database and bear the CE marking to indicate its conformity before it can be placed on the market. If any substantial modifications are made to the system, including retraining on new data or adding or removing features from the model, it must undergo a new conformity assessment to ensure that obligations are still being met before it can be re-certified and registered in the database.
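
As a reading aid, the re-certification trigger can be sketched as a simple predicate. The two modification types below are the examples given in this article; the Act’s notion of a substantial modification is broader, and the labels are hypothetical:

```python
# Modification types named in this article as triggering reassessment.
SUBSTANTIAL_MODIFICATIONS = {"retrained_on_new_data", "features_added_or_removed"}

def needs_new_conformity_assessment(changes: set[str]) -> bool:
    """True if any change counts as a substantial modification per the article."""
    return bool(changes & SUBSTANTIAL_MODIFICATIONS)

# A retrained model must be reassessed before re-registration in the EU database.
assert needs_new_conformity_assessment({"retrained_on_new_data"})
assert not needs_new_conformity_assessment({"documentation_update"})
```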


What practices are prohibited under the EU AI Act?

Article 5 prohibits the following practices, which are deemed to pose too high a risk:

  • The use of AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the intention of materially distorting a person’s behaviour and impairing their ability to make an informed decision in a way that causes significant harm
  • AI systems that exploit the vulnerabilities of a person or specific group of people – including groups based on personality traits, social or economic status, age, physical disability, or mental ability – with the objective of distorting behaviour to cause significant harm
  • Biometric categorization systems that categorise individuals according to sensitive or protected attributes or based on the inference of those attributes
  • Systems used for social scoring, evaluating or classifying individuals based on social behaviour or personal or personality characteristics, if this leads to the negative treatment of individuals or groups outside of the context in which the data was collected or if the treatment is disproportionate to the social behaviour
  • Systems to assess the risk of (re)offending or of committing a criminal or administrative offence, using profiling of personality traits and characteristics or past criminal behaviour
  • Emotion recognition systems by law enforcement, for border management, or in a workplace or educational setting
  • Indiscriminate and untargeted scraping of biometric data from the internet (including social media) or CCTV footage to create or expand facial recognition databases
  • Real-time remote biometric identification systems in publicly accessible spaces
  • Post remote biometric identification systems used to analyse recorded footage from publicly accessible spaces, except in the case of a targeted search in connection with a specific serious criminal offence and subject to judicial authorisation

What are the penalties for non-compliance?

It is imperative that organisations that may fall under the EU AI Act are aware of their obligations, as non-compliance comes with steep penalties of up to €30 million or 6% of global turnover, whichever is higher (raised to €40 million or 7% in the Parliament’s adopted text – see the FAQ below). The severity of fines depends on the level of transgression, ranging from using prohibited automated systems at the high end to supplying incorrect, incomplete, or misleading information at the low end, which can result in fines of up to €10 million or up to 2% of turnover.
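
Because the ceilings are expressed as “whichever is higher”, the effective cap scales with company size. Here is a minimal sketch of the arithmetic, assuming the €30 million / 6% and €10 million / 2% figures quoted in this paragraph:

```python
def max_fine(global_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper bound of the fine: the higher of a fixed cap or a share of global turnover."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# A firm with €1bn global turnover facing the top-tier ceiling:
print(max_fine(1_000_000_000, 30_000_000, 0.06))  # 60000000.0 -> the 6% ceiling applies

# The same firm at the low end (incorrect, incomplete, or misleading information):
print(max_fine(1_000_000_000, 10_000_000, 0.02))  # 20000000.0
```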


Prepare for a global impact

The EU AI Act will be a landmark piece of regulation and seeks to become the global gold standard for AI regulation with its sector-agnostic approach, which will help ensure consistent standards across the board. The rules impose obligations that are proportionate to the risk of the system, ensuring that potentially harmful systems are not deployed in the EU, while those associated with little or no risk can be used freely. Those that pose a high risk will be constrained accordingly, without preventing opportunities for innovation and development.

The Act will have far-reaching implications, affecting entities that interact with the EU market even if they are based outside the EU. There are considerable obligations to comply with, particularly for high-risk AI systems, and navigating the text is no small feat. Preparing early is the best way to ensure compliance and that obligations are met. To find out more about how Holistic AI can help you prepare, get in touch at we@holisticai.com.

EU AI Act - Frequently Asked Questions

Here are some frequently asked questions (FAQs) about the EU AI Act, providing clarification on regulations surrounding artificial intelligence in the European Union.

What is an Unacceptable Risk under the EU AI Act?

Under the terms of the AI Act, unacceptable-risk systems are those which are exploitative, manipulative, or use subliminal techniques. Systems classified as unacceptable risk are banned in the EU. Examples include real-time remote biometric identification in publicly accessible spaces and social scoring systems.

What costs should we expect from the EU's AI Act?

The costs associated with violating the Act are severe. Failure to comply could result in a fine of up to €40 million or 7% of global turnover, whichever is higher. These figures, which are taken from the latest version of the Act, are even higher than in previous iterations.

What are the High-Risk sectors for the EU AI Act?

High-Risk systems are those which have the potential to significantly impact the life chances of a user – systems used in democratic processes or law enforcement, for example. They have the most stringent reporting requirements.

What is the argument against AI regulation?

Some argue that overly strict regulation in the domain of AI could stifle innovation, depriving the world of the immense societal benefits that the technology can bring. However, there is now a broad consensus among academics, lawmakers, and wider society that regulation is needed in order to mitigate the equally serious risks AI can pose.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
