- Timeline: The EU AI Act is expected to pass the European legislative procedure by the end of this year (2023).
- Grace period: There will be a grace period of 2 years (possibly 3) to prepare before the Act comes into force.
- The European Commission aims to foster early implementation of the Artificial Intelligence Act during the grace period through collaboration with the industry.
- An agreement has been reached on the final versions of some parts of the EU AI Act, including the provisions on some of the requirements for high-risk systems. However, some highly controversial elements, such as the definitions, have yet to be finalised.
- Consumer protection: The EU AI Act is being shaped as a piece of legislation providing a general framework focusing on consumer protection rather than product safety legislation.
- EU and US will diverge: The EU and the US are adopting different approaches to AI regulation.
- Standards and implementing acts: The Act does not provide clear guidelines on how to classify AI systems, implement requirements, or make assessments. It instead relies heavily on standards and implementing acts.
- Remaining uncertainties: Uncertainties about key topics like the classification of high-risk AI systems, liability, and enforcement have heightened industry interest in risk assessment, compliance tools, and protective measures.
The world’s first comprehensive AI regulation in the making
Discussions among the three major European institutions – the European Commission, the European Parliament, and the Council of the European Union – to finalise the text of the EU AI Act are still ongoing. A deal has been reached by all three institutions on the final form of some provisions during this Trilogue stage, while uncertainty persists for others. Indeed, some of the most controversial provisions, such as the definitions, have not even been discussed yet.
This blog post explains the current situation of the European AI Act and the debates surrounding it. The key issues and updates about the Act are as follows:
1. Adoption timeline: The EU AI Act is expected to pass the European legislative procedure by the end of this year.
a. Both the Parliament and the Commission are pushing for the adoption of the EU AI Act, in line with the informal deadline set by the Spanish presidency.
b. There are different reasons for this push, two of which are as follows:
- The EU institutions want to be the first ones to regulate AI in a comprehensive manner and create the so-called “Brussels Effect” in AI regulation as well.
- The upcoming EU elections may shift the balance of power within the institutions and prevent the EU AI Act from passing in its current form.
2. Grace period: There will be a grace period of 2 years (possibly 3) to prepare before the entry into force of the EU AI Act.
- Given the significant requirements it brings, the EU AI Act needs a grace period.
- The grace period is commonly thought to be 2 years, similar to the GDPR. However, some sources mention the possibility of a 3-year grace period as well.
- A public statement by Commissioner Thierry Breton signals 2026 as the enforcement date. He stated that “[w]e cannot afford to sit back and wait for the new law to become applicable in 2026. This is why I have started working with AI developers on an "AI Pact" to anticipate its implementation”, which points to a grace period of 2-3 years.
3. AI Pact: The EU Commission will collaborate with the major industry players for the implementation of the EU AI Act during the grace period.
- A longer grace period may be seen as incompatible with the EU’s ambition to regulate and have a binding regulatory framework for AI as soon as possible. However, the Commission is determined to foster implementation of the EU AI Act even during the grace period. To this end, it has announced an initiative labelled the AI Pact.
- For now, there is little detail as to what the AI Pact entails. However, it is expected to fill the gap during the grace period by preparing the industry for implementation of the EU AI Act early on.
4. The Trilogue continues: Some provisions have been approved during the Trilogue, but most of the highly controversial ones are not among them.
- Kai Zenner, head of office for MEP Axel Voss – a shadow rapporteur for the EU AI Act – shared an update on the Trilogue, announcing that some parts of the EU AI Act had been approved, meaning a deal had been reached on the final text of some provisions by the three major European institutions.
- Approved provisions include obligations of providers of high-risk systems, standards, conformity assessments, requirements for high-risk systems, and penalties, among others.
- Definitions and recitals are among the parts that have not yet been discussed.
- There are still many uncertain issues, such as the list of high-risk systems, the regulatory approach adopted for generative AI, and the definitions. Based on the discussions and developments that took place after the Commission’s initial proposal, we anticipate that the final version will differ significantly from that proposal and will include references to novel issues introduced by the Council and the Parliament. This is one of the few times the Parliament has made this many significant structural changes to an initial Commission proposal.
- Organisational and supervisory requirements are imposed on notifying authorities – national authorities to be designated or established by the Member States pursuant to Article 30 of the EU AI Act. Data protection authorities across the EU are seen as the prospective national authorities under the EU AI Act, but there is disagreement at the policy-making level over whether DPAs are well equipped for the job. Article 30 is among the approved provisions, but no information on the approved version is publicly available yet.
5. Nature of the EU AI Act: The EU AI Act is being shaped as a piece of legislation providing a general framework with a focus on consumer protection rather than product safety legislation.
- The EU AI Act provides a generally applicable framework with its broad concepts and definitions for AI systems.
- Particularly with the amendments made by the Council and the Parliament, the Act reads less like product legislation and more like consumer protection legislation.
- This focus on consumer protection is also being criticised on the grounds that it may hamper innovation.
6. Divergence between the US and the EU: The EU and the US are adopting different approaches to AI regulation.
- It is commonly understood that efficient AI regulation cannot be established without international cooperation, given that AI systems by design require cross-border transfers of data and services.
- We do not yet know how the final positions will be shaped. However, statements by both EU and US officials make it almost certain that the two are following, and will continue to follow, different approaches.
- Here, standards may play an important role. As seen in some other technical fields, standards can build convergence between the US and the EU approaches to a certain extent.
7. The role of standards and implementing acts: The EU AI Act does not provide clear guidelines on how to classify AI systems, implement requirements, or make assessments but relies heavily on standards and implementing acts.
a. In keeping with its framework nature, the EU Artificial Intelligence Act only provides general rules and principles and, in most cases, leaves the details to implementing acts as well as standards.
b. Implementing acts: There are many provisions granting the Commission the authority as well as the duty to adopt implementing acts governing the details of the respective provisions.
c. Standards: In line with Recital 61 of the AI Act, stipulating that “standardisation should play a key role to provide technical solutions to providers to ensure compliance”, standards are expected to be an important policy tool in regulating AI under the EU AI Act.
- The European Commission has requested that the European standardisation organisations (CEN/CENELEC) develop technical standards for AI Act compliance in parallel with the legislative process. CEN and CENELEC have established the Joint Technical Committee 21 (CEN-CENELEC JTC 21 ‘Artificial Intelligence’) for the development of technical standards on AI.
- Accelerated work on standards when the EU AI Act is not yet binding is unusual, as standards are usually developed after a law is adopted. This can be read as another indication of the urgency with which the EU institutions are treating the AI Act.
- The ISO (International Organization for Standardization) is also actively working on the development of standards applicable to AI systems. Similar to CEN/CENELEC, the ISO has a joint technical committee with the International Electrotechnical Commission on AI standards (ISO/IEC JTC 1/SC 42).
- Technical standards are voluntary in principle. However, they have significant practical importance in AI regulation for two main reasons:
1. First, technical standards adopted by CEN/CENELEC and ISO are generally followed by industry participants and may turn into de facto industry standards over time.
2. Second, CEN and CENELEC are European Standardisation Organisations authorised to develop European standards, compliance with which may trigger a presumption of conformity as per Article 40 of the EU AI Act for certain obligations and requirements, including the requirements for high-risk AI systems.
- On the other hand, access to standards is a practical problem: CEN/CENELEC and ISO are non-transparent organisations that provide standards and other relevant content only in exchange for payment.
8. Interest in AI GRC (Governance, Risk, and Compliance): The uncertainty regarding important issues such as the list of high-risk AI systems, liability, and enforcement creates industry interest in risk assessment and compliance tools as well as measures.
- How the requirements of the new EU legislation are to be met, and what the evaluations will look like in practice, are currently ambiguous and among the primary concerns.
- Accordingly, there is a general interest in risk assessment and auditing tools among stakeholders.
- Participants from the private sector, NGOs, and even some public organisations want to know how they can conduct assessments and audits of AI systems pursuant to the EU AI Act.
- The uncertainty surrounding the enforcement of the documentation and accountability provisions – particularly how liability will be allocated downstream, what harms may arise, and how they will be redressed – is frequently raised by industry participants.
- The final list of high-risk/prohibited AI systems and associated requirements is uncertain and open to debate. Both generally drafted prohibitions and specific use-case-based approaches have supporters.
The EU AI Act is on the horizon. While the final text is still under discussion, it is widely acknowledged that its introduction will mark one of the most significant shifts in the regulatory landscape in recent years. Penalties could be as steep as €40m or 7% of global turnover, whichever is higher.
Your organisation needs to act early to be compliant, and the Holistic AI Platform is the optimal solution. The Platform enables enterprise-wide AI Governance and tracks AI regulations not just in the EU but worldwide.
Schedule a call with a member of our expert team for more details.
Authored by Osman Güçlütürk, Holistic AI Senior Policy Associate.