The EU AI Act is a landmark piece of legislation proposed by the European Commission in 2021 to regulate the AI systems available on the EU market. Taking a risk-based approach to regulation, systems are classed as posing minimal risk, limited risk, high risk, or an unacceptable level of risk, with obligations proportional to the level of risk posed by a system.
After several iterations of the text, a general approach was adopted on 6 December 2022. This text was then debated and revised ahead of a European Parliament vote on 26 April 2023. A political agreement was then reached on 27 April ahead of a key committee vote on 11 May 2023.
The leading parliamentary committees have approved the text by majority vote, with a tentative plenary adoption date set for 14 June 2023, representing an important step forward for the Act.
A number of changes have been made to the text of the Act following the adoption of the general approach, with an important step being made towards the standardisation of how artificial intelligence is defined.
The December text defined AI as:
“a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”
The text is now more aligned with the OECD definition, defining AI as:
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
For reference, the OECD defines AI as:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
Previous versions of the text failed to explicitly define significant risk, or even risk for that matter, despite referring to them several times. The adopted text, however, defines ‘risk’ as
“the combination of the probability of an occurrence of harm and the severity of that harm”
and ‘significant risk’ as:
“a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons”.
Given the risk-based approach of the Act, this is an important step forward in defining and consolidating its approach, although these definitions are still subjective and open to individual interpretation.
The December text specified eight broad applications of AI that were considered high-risk, with more specific use cases set out within each:

- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and essential public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
A notable change in the adopted text is the specification of biometrics and biometrics-based systems, expanding the scope of the category from remote biometric identification systems to cover those intended to be used for “biometric identification of natural persons and AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems”. This is with the exception of those mentioned in Article 5, which outlines prohibited practices.
The adopted text also makes an important update to allow providers of high-risk AI systems to inform the national supervisory authority in the event that they do not deem their system to pose a significant risk of harm to health, safety, fundamental rights or the environment. The authority then has three months to review and object to this notification where they do foresee the system resulting in significant risk.
Annex III has also been amended, narrowing the wording of the following high-risk categories: critical infrastructure, education, and access to essential services. Conversely, the categories of law enforcement, migration control, and administration of justice have been broadened. Another addition to the Annex is recommender systems of social media platforms designated as very large online platforms (VLOPs) under the EU’s Digital Services Act.
A completely new obligation has been introduced for users of high-risk AI systems, mandating them to conduct a Fundamental Rights Impact Assessment, which is envisaged to consider the potential negative impacts an AI system may have on marginalised groups and the environment. Existing obligations for high-risk systems have also been made more comprehensive under the adopted text, particularly those concerning documentation requirements, data protection impact assessments, notifications, and risk management.
Parliament has reached a majority consensus on expanding the list of prohibited practices, adding several new bans, including on AI models used for biometric categorisation, predictive policing, and the collection of facial images to build databases. Despite some contention, biometric identification systems that were originally permitted on a case-by-case basis, such as for law enforcement access, are now completely banned. Similarly, a total ban on remote biometric identification in public spaces was proposed, but this was opposed during the vote to leave leeway for law enforcement where needed.
After much deliberation, the European Parliament has agreed to adopt a tiered approach to governing General Purpose AI systems (GPAIs), with different obligations for GPAIs, foundation models, and generative AI applications.
Updated requirements on operators across the AI value chain are expected to become more proportional, with stricter obligations on downstream players who make significant modifications to a GPAI model. Additionally, providers of GPAIs will be expected to support the compliance requirements of downstream operators by making available all relevant documentation of the model used.
The legislation also includes a new article on foundation models (Article 28b), which stipulates stricter obligations on robustness, risk management, data governance, and transparency for these models, in addition to compliance with European environmental standards. Highly stringent obligations on design, performance, risk management, transparency, and independent vetting are expected to be imposed on generative AI applications such as ChatGPT, with providers of such systems mandated to disclose all copyrighted materials used to develop their models.
Likely to become the global gold standard for AI regulation, the EU AI Act will have important implications for the fairness and safety of AI systems available on the EU market, protecting those interacting with the systems from preventable harm. However, this will also see deployers and users of AI systems facing a number of obligations.
The adoption of this text and setting of a plenary adoption date is an important signal that significant progress is being made with the Act. The best way to ensure compliance is to get started on your risk management journey early. To find out how Holistic AI can help you with this, get in touch at firstname.lastname@example.org.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts