The EU AI Act’s Risk-Based Approach: High-Risk Systems and What They Mean for Users

The use of AI is proliferating globally across all sectors. While this can bring many benefits, including increased efficiency and greater accuracy, these systems can also pose novel risks. As such, policymakers around the world are starting to propose legislation to manage these risks.

Leading these efforts is the European Union (EU) with its proposed rules on Artificial Intelligence, known as the EU AI Act. Expected to become the global gold standard for AI regulation, the EU AI Act sets out a risk-based approach, where the obligations on a system are proportionate to the level of risk it poses. Specifically, the Act outlines four levels of risk (summarised in the illustrative sketch after the list):

  • Minimal-risk systems - these include spam filters and AI-enabled video games, and make up the majority of AI systems currently on the market.
  • Limited-risk systems - systems that i) interact with humans, ii) detect humans or categorise people on the basis of biometric data, or iii) produce manipulated content. Examples include chatbots and systems used to generate deep fakes; these systems are subject to transparency obligations.
  • High-risk systems - systems that can have a significant impact on a user’s life chances. Eight types of system fall into this category: i) biometric identification systems; ii) systems for critical infrastructure and protection of the environment; iii) education and vocational training systems; iv) systems used in employment, talent management and access to self-employment; v) systems affecting access to and use of private and public services and benefits, including those used in insurance; vi) systems used in law enforcement; vii) systems used to manage migration, asylum and border control; and viii) systems used in the administration of justice and democratic processes, including systems used on behalf of the judicial authority. These systems are subject to stringent obligations and must undergo conformity assessments before being placed on the EU market.
  • Unacceptable-risk systems - systems that manipulate behaviour in ways that may cause physical or psychological harm; exploit the vulnerabilities of a group based on age, physical or mental disability, or socioeconomic status; are used for social scoring by governments; or are used by law enforcement for real-time biometric monitoring in public areas. These systems may not be placed on the EU market.
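
To make the four-tier scheme concrete, here is a minimal Python sketch of how a provider might triage a system during an initial, non-legal assessment. The tier names, category strings and `classify` helper are our own illustration for this post, not terminology or logic from the Act itself; the order of checks simply mirrors the hierarchy described above.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters, AI-enabled video games
    LIMITED = "limited"            # transparency obligations apply
    HIGH = "high"                  # conformity assessment before market entry
    UNACCEPTABLE = "unacceptable"  # prohibited on the EU market

# Simplified stand-ins for the eight high-risk categories listed above.
HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure_and_environment",
    "education_and_vocational_training",
    "employment_and_talent_management",
    "private_and_public_services_and_benefits",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def classify(category: str,
             prohibited_practice: bool = False,
             transparency_duty: bool = False) -> RiskTier:
    """Toy triage helper; real classification requires legal analysis."""
    if prohibited_practice:        # e.g. social scoring by a government
        return RiskTier.UNACCEPTABLE
    if category in HIGH_RISK_CATEGORIES:
        return RiskTier.HIGH
    if transparency_duty:          # e.g. a chatbot or deep-fake generator
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("law_enforcement"))                  # RiskTier.HIGH
print(classify("chatbot", transparency_duty=True))  # RiskTier.LIMITED
```

The ordering in `classify` reflects that prohibitions trump everything else: a system that constitutes a prohibited practice cannot reach the EU market at all, regardless of any conformity assessment.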

Download our whitepaper below to find out more about the high-risk systems outlined by the EU AI Act and what they mean for users.

Written by Osman Gazi Güçlütürk, Member of the Mirror Committee on AI.
