The latest version of the EU AI Act (AIA) has been passed by the European Parliament with a large majority and will now proceed to the final Trilogue stage.
The EU AI Act (AIA) is a landmark piece of legislation proposed by the European Commission in 2021 to regulate the AI systems available on the EU market. Taking a risk-based approach to regulation, systems are classed as posing a minimal, limited, high, or unacceptable level of risk, with obligations proportionate to the system’s classification. An extensive consultation process saw several compromise texts proposed, with the Draft General Approach adopted on 6 December 2022. This text was then debated and revised ahead of a European Parliament committee vote on 26 April 2023, with a political agreement reached on 27 April.
By majority vote, the leading parliamentary committees accepted the revised text on 11 May 2023 ahead of a European Parliament vote on 14 June 2023, where the May text was passed with a large majority. Following the vote, a press conference was led by European Parliament President Roberta Metsola and rapporteurs Brando Benifei and Dragoş Tudorache. This vote kicks off the final stage of the legislative process, in which the three EU institutions will convene Trilogues to negotiate the final version of the text. Proceedings will commence tonight and are predicted to conclude by the end of the year.
Key takeaways from today’s press conference:
With the intent of creating global alignment in regulatory language on AI, the voted-through compromise text moves closer to the OECD’s definition, defining AI as:
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
For reference, the OECD defines AI as:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
The Draft General Approach adopted in December 2022, however, defined AI as:
“A system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.”
This represents significant progress towards greater alignment on how AI is defined and, therefore, on which systems fall within the scope of regulation. As was reiterated several times throughout the press conference, this alignment is critical to creating a global standard for AI regulation.
A system is considered a high-risk AI system (HRAIS) if it is covered under Annex III of the EU AI Act and poses significant risks to an individual’s health, safety, or fundamental rights. The second condition was newly added in the compromise text, which gives providers of such systems the option to notify the relevant authorities if they deem that their systems do not pose such risks. The authority then has three months to review the notification and object in cases where it does foresee the system resulting in significant risk.
Consistent with the text adopted by the leading committees in the EU Parliament in May, the voted compromise text covers eight high-risk categories under Annex III: biometric and biometrics-based systems; management and operation of critical infrastructure; education and vocational training; employment, workers management, and access to self-employment; access to essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes.
The voted compromise text introduces a new section (Article 28b) to specifically govern Foundation Models. This section directs providers of Foundation Models to integrate design, testing, data governance, cybersecurity, performance, and risk mitigation safeguards into their products before placing them on the market, so that foreseeable risks to health, safety, fundamental rights, and democracy are mitigated. Further, the text mandates that providers of such models comply with European environmental standards and register their models in an EU database, which will be managed by the European Commission.
Also covered under the same section, Generative AI services will be subject to stricter transparency obligations, with providers of such applications required to inform users when a piece of content is machine-generated, deploy adequate training and design safeguards, ensure that synthetic content generated is lawful, and publicly disclose a summary of copyrighted materials used to develop their models.
Tudorache speculated that an earlier enforcement date could be negotiated for generative AI and foundation models, although the role of international voluntary codes of conduct before legal requirements come into effect was also emphasised.
With the protection of EU citizens at the heart of this legislation, the law will prohibit AI systems that are deemed to pose an unacceptable level of risk. The May version of the text expanded the list of prohibited practices, including AI models used for biometric categorisation, predictive policing, and the collection of facial images for database construction.
Under this text, remote biometric identification in public spaces is banned outright. Amendments to leave leeway for law enforcement were tabled ahead of the vote, but the European Parliament’s final position, per the 14 June vote, is to move forward with a total ban. Concerns were raised about the implications this could have for France’s plans to use AI-assisted crowd control at the 2024 Olympics, which would conflict with the ban if the AIA came into effect ahead of the event; however, the French draft law permitting this states that the technology will not use biometric information.
In addition to codifying compliance requirements for AI applications, Tudorache and Benifei reiterated their commitment to ensuring that the EU AI Act also helps catalyse European AI innovation. The text contains provisions on establishing AI regulatory sandboxes and provides guidance on the support available to small and medium enterprises (SMEs) deploying automated systems. The need to establish harmonised standards for implementing the Act’s provisions was also mentioned, with Tudorache stressing that their development will be bottom-up rather than following traditional top-down approaches, signalling the Act’s intent to democratise rulemaking processes for AI systems.
When asked about the prospect of the AI Act creating another Brussels Effect like the GDPR, the co-rapporteurs signalled the increasing need for international cooperation for the responsible governance of AI and automated systems. With international alignment on the definition of AI, the AI Act is expected to complement parallel regulatory efforts in countries like the US, as well as multilateral approaches undertaken by the G7 Hiroshima AI Process and the Global Partnership on AI.
Protecting EU citizens while not stifling innovation is a challenging task but one that is aided by the Act’s risk-based approach, ensuring that obligations are not disproportionate to the risk posed by the system. To support the implementation of the Act, the rapporteurs stressed the role that education and upskilling of EU citizens will play in ensuring that citizens have the awareness and competence needed to benefit from the offerings of AI while also protecting them from avoidable harm.
The voted compromise text also includes redress mechanisms for citizens to ensure that harms are remedied promptly and adequately. Fundamental rights assessments will play a key role in protecting citizens’ rights, with a new requirement to conduct Fundamental Rights Impact Assessments for high-risk systems, considering the potential negative impacts of an AI system on marginalised groups and the environment.
Although negotiations on the EU AI Act are expected to be finalised by the end of 2023, there will likely be a two-year period before the law comes into effect, meaning it will not be a legal requirement prior to the 2024 EU elections. While concerns have been raised about how the spread of misinformation online might affect votes, particularly since the voting age is 16 in some countries and younger voters are likely to get their information online, the panel highlighted the importance of the Digital Services Act (DSA) in regulating content on hosting services, marketplaces, and online platforms. Due to come into effect on 17 February 2024, with earlier obligations, including independent audits, for designated Very Large Online Platforms and Very Large Online Search Engines, the DSA sets out requirements for handling disinformation, as well as other risks such as discrimination and illegal content. While the EU AI Act will have more sweeping requirements and target a broader range of harms, the DSA will have important implications in the meantime, particularly for upholding the integrity of the EU elections.
Before long, the EU AI Act will have significant implications for businesses around the world, requiring considerable action to ensure compliance. Preparing early is the best way to meet these obligations and ensure that preventable harm does not occur. Get in touch at email@example.com to find out how Holistic AI can help you on your compliance journey.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts