What Enterprises Need to Know About the EU AI Act

December 13, 2023
Authored by Airlie Hilliard, Senior Researcher at Holistic AI

Key takeaways

  • The EU plans to adopt the AI Act within the next year.
  • The AI Act is set to be the “GDPR for AI”, with hefty penalties for non-compliance, extra-territorial scope, and mandatory requirements for businesses.
  • The AI Act will shine a spotlight on AI risk, significantly increasing awareness of the importance of responsible AI among businesses, regulators and the wider public.
  • Enterprises should act now to establish AI risk management frameworks to minimise the legal, reputational and commercial damage of falling foul of the Act.

What is the EU AI Act?

The EU AI Act is the EU’s proposed law to regulate the development and use of ‘high-risk’ AI systems, including those used in HR, banking and education.

The EU AI Act was first proposed by the European Commission in April 2021. It will be the first law worldwide to regulate the development and use of AI in a comprehensive way.

The AI Act is set to be the “GDPR for AI”, with hefty penalties for non-compliance, extra-territorial scope, and a broad set of mandatory requirements for organisations which develop and deploy AI.

Any enterprise operating in or selling into Europe should be aware of the Act’s wide-ranging implications and take steps to prepare for its provisions.

When will the AI Act be adopted?

Adopting the AI Act remains a top priority for the EU institutions.

The EU plans to adopt the AI Act within the next year. The absolute deadline is February 2024, due to the European Parliament elections and appointment of the new European Commission in May 2024.

The next few months will be dominated by the European Parliament (MEPs) and Council of Ministers (EU member state governments) negotiating their respective positions.

The European Commission has requested that the European standards organisations (CEN/CENELEC) develop technical standards for AI Act compliance in parallel with the legislative process, and that the standards be completed by the time the Act is adopted, in 2024 at the latest. This is unusual, as standards are usually developed after a law is adopted, and it demonstrates the urgency with which the Commission is treating the AI Act.

What are the outstanding issues in the ongoing negotiations?

Before the AI Act can be adopted, the EU institutions need to reach agreement on the final text. The outstanding issues dominating the process are:

  • The list of high-risk AI use cases: Some groups are advocating for a delimited and proportionate list of ‘high-risk’ AI use cases (to which the AI Act’s mandatory requirements apply), whereas others want the list to be more expansive, capturing a greater number of plausible AI risks. Critics have dubbed the continual addition of new use cases, such as insurance, a “Christmas tree approach”.
  • The definition of AI: In the Commission’s original proposal, the definition was very broad, including “output generating software based on statistical approaches”. Industry bodies are pushing for a tighter definition, but some groups oppose this.
  • Allocation of responsibilities between AI ‘providers’ and ‘users’: The original proposal places most of the compliance obligations on providers (i.e., the developers of AI systems). Industry bodies argue that this balance is lopsided, as it does not adequately reflect the ways in which the users (i.e., organisations who purchase AI from providers and deploy it) can influence how the system operates and what impact it has.
  • General purpose AI: Industry bodies are pushing back against proposals that the providers and users of general purpose AI systems must follow all the mandatory requirements of high-risk AI systems. They say this would stifle innovation. In the Czech Presidency’s latest compromise proposal, the Commission would be obliged to adopt future regulation which specifies how the AI Act’s requirements should be applied to general purpose AI systems, based on an impact assessment.
  • Facial recognition: MEPs are expected to push for an outright ban on facial recognition technology used for the real-time identification of individuals by law enforcement authorities.

Which issues are agreed upon?

Some issues are not proving contentious, meaning it is highly likely that the adopted AI Act will reflect these positions:

  • GDPR-style provisions, including:

        ○ Hefty fines for non-compliance: Up to €30m or 6% of global annual turnover (whichever is higher) for the most serious offences; an illustrative calculation follows this list.

        ○ The Act is extra-territorial: It will apply to any company worldwide, if they are selling into or using their AI system in the EU.

        ○ Grace period: Once the Act is adopted, businesses will likely have a grace period before any enforcement action starts.

  • Mandatory requirements for high-risk AI systems

        ○ There is widespread agreement regarding the Commission’s proposals for the mandatory requirements for providers and users of high-risk AI systems, including risk management frameworks, conformity assessments, quality assurance testing and technical documentation/record keeping.
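
To make the penalty provision concrete, here is a minimal illustrative sketch in Python of the “whichever is higher” rule. The function name and sample turnover figures are our own assumptions for illustration only; the €30m and 6% thresholds come from the proposed penalty provisions described above.

    # Illustrative sketch only; not legal guidance.
    def max_fine_eur(global_annual_turnover_eur: float) -> float:
        # Upper bound of the fine for the most serious offences under the
        # proposed rule: up to €30m or 6% of global annual turnover,
        # whichever is higher.
        return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

    # The flat €30m cap dominates up to €500m turnover; beyond that,
    # the 6% figure takes over.
    print(max_fine_eur(100_000_000))    # €30m: flat cap applies
    print(max_fine_eur(2_000_000_000))  # €120m: 6% of turnover applies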

How will enterprises be expected to comply with the AI Act?

Enterprises will face a multitude of compliance obligations when the AI Act is adopted.

Conformity assessments and technical standards will be key for enterprises

Organisations will have to establish AI risk management frameworks and undertake conformity assessments to demonstrate that their AI systems are compliant with the Act. This is estimated to result in compliance costs of between €200,000 and €330,000 per company.

The technical standards developed by CEN/CENELEC are voluntary, but organisations that adopt them will benefit from a presumption of conformity with the AI Act in the relevant area.

How does the AI Act relate to the AI Liability Directive?

On 28 September 2022, the European Commission published its proposal for an AI Liability Directive.

The AI Act and the AI Liability Directive are two sides of the same coin.

The AI Act is designed to prevent harm caused by AI, whereas the AI Liability Directive is designed to ensure that victims are fairly compensated if harm occurs.

The Directive will make it easier for individuals to claim damages against companies for harm caused by their AI system.

By exposing enterprises to the possibility of being held liable and having to pay compensation for harm caused by their AI systems, and by directly linking non-compliance with the AI Act to liability for AI-induced harm, the Directive incentivises compliance with the AI Act.

How should enterprises prepare for the AI Act?

It will not be long before enterprises that develop and deploy AI systems are obliged to comply with the AI Act’s broad set of requirements.

Forward-thinking enterprises should therefore act now to establish AI risk management frameworks, minimising the legal, reputational and commercial damage that could result from falling foul of the AI Act.

Just like the GDPR did for privacy, the AI Act will shine a spotlight on AI risk, significantly increasing awareness of the importance of responsible AI among businesses, regulators and the wider public.

DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice regarding any specific situation.
