Penalties of the EU AI Act: The High Cost of Non-Compliance

Authored by
Osman Gazi Güçlütürk
Legal & Regulatory Lead in Public Policy at Holistic AI
Siddhant Chatterjee
Public Policy Strategist at Holistic AI
Airlie Hilliard
Senior Researcher at Holistic AI
Published on
Feb 18, 2024

The EU aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (EU AI Act). It seeks to lay out a normative framework so that risks of AI systems are managed and mitigated to build trust in AI systems used in the EU and protect the fundamental rights of EU citizens.

In doing so, the EU AI Act introduces a risk-based approach that defines three levels of risk for AI systems: minimal, high, and unacceptable. The Act classifies general purpose AI (GPAI) models according to their systemic impact and also subjects AI systems interacting with users to a set of transparency obligations. Penalties for non-compliance follow a tiered system, with more severe violations of obligations and requirements carrying heftier penalties.

The EU AI Act was first proposed in 2021 and has since undergone an extended consultation process involving multiple rounds of amendments.

This process has seen changes made throughout the text, including to the obligations of implicated entities and the penalties for violating those obligations. The text is now at the final stages of the EU lawmaking procedure and pending approval by the Parliament.

At Holistic AI, we’ve tracked the EU AI Act and provided guidance throughout its drafting. This post outlines the penalties of the AI Act under the most recent version of the text.

Key Takeaways

  • The regulation sets out a three-tier structure for fines against infringements by AI system operators in general, supplemented by additional provisions for providers of GPAI models and for Union agencies.
  • The heftiest fines are imposed for violations related to prohibited systems: up to €35,000,000 or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
  • The lowest penalties for AI operators are for supplying incorrect, incomplete, or misleading information: up to €7,500,000 or 1% of total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Penalties for non-compliance can be issued to providers, deployers, importers, distributors, and notified bodies.

What is the tiered approach the EU AI Act takes for penalties?

The initial proposal for the EU AI Act by the Commission introduced a three-tier approach to penalties, a structure that was maintained under the Council's General Approach but later modified into a four-tier model in the Parliament's position.

That said, the latest draft by the Council of the EU reintroduced a three-tier approach to penalties under Article 71, some of which surpass the hefty fines of the GDPR, which cap at €20,000,000 or 4% of total worldwide annual turnover, whichever is higher.

Broadly, penalties under the EU AI Act target three distinct parties: operators of AI systems, providers of general purpose AI models, and Union institutions, agencies, and bodies. All three tiers apply to operators, while only the bottom tier applies to providers of general purpose AI models. Union bodies have their own penalty system.

Penalties of the EU AI Act

Administrative fines against AI system operators

Tier 1: Non-compliance with the prohibitions

The heftiest fines are reserved for placing on the market or using systems prohibited under the AI Act due to the unacceptable level of risk that they pose. These violations are subject to fines of up to €35,000,000 or up to 7% of annual worldwide turnover for companies, whichever is higher. This surpasses the penalties under the GDPR, making these some of the heftiest penalties for non-compliance in the EU.

These penalties are incurred for using any of the following systems in the EU:

  • AI systems that deploy subliminal, purposefully manipulative, or deceptive techniques materially distorting a person’s behavior and causing them to make a decision that they would not have otherwise taken in a manner that is likely to cause significant harm,
  • Biometric categorization systems that individually categorize natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation,
  • AI systems for the evaluation or classification of people over a certain period of time based on their social behavior or personal or personality characteristics, with the social score leading to the detrimental or unfavorable treatment of people,
  • Real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement,
  • AI systems for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offense, based solely on the profiling of a natural person or on assessing their personality traits and characteristics,
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage,
  • AI systems that infer the emotions of natural persons in the areas of workplace and education institutions.

Tier 2: Non-compliance with obligations

The second-highest fines are set forth for non-compliance with specific obligations for providers, representatives, importers, distributors, deployers, notified bodies, and users. Non-compliance with the relevant provisions is subject to fines of up to €15,000,000 or up to 3% of annual worldwide turnover for companies, whichever is higher.

These penalties are incurred by not meeting the following provisions on obligations:

1. Obligations of the providers of HRAIs under Article 16

The following obligations apply to providers of High-Risk AI systems (HRAI):

  • Ensuring that their HRAI is compliant with the HRAI requirements:
    • Risk management (Art. 9) - Providers must continuously identify and analyze foreseeable risks to the health, safety, and fundamental rights of individuals across the entire life cycle of the system, and take suitable measures to eliminate or minimize those risks,
    • Data governance (Art. 10) - Training, validation, and testing data sets must meet quality criteria covering relevance, data collection processes, and examination for possible biases. Data governance and management practices must ensure, to the best extent possible, the use of relevant and representative data sets,
    • Technical documentation (Art. 11) - Technical documentation is needed to demonstrate compliance with the requirements and must be kept up to date. It should provide a clear and comprehensive basis for assessing compliance,
    • Record-keeping (Art. 12) - Events must be recorded automatically over the life cycle of the system to allow tracing of its proper functioning and identification of possible breaches,
    • Transparency (Art. 13) - Providers must accompany systems with clear instructions for use that indicate the characteristics, capabilities, and limitations of the system,
    • Human oversight (Art. 14) - Appropriate human-machine interface tools must be built into the systems so that they can be overseen by a natural person, who must be able to understand, interpret, and intervene in the system’s operation,
    • Accuracy, robustness, and cybersecurity (Art. 15) - Throughout its life cycle, the system must be resilient to faults and errors, technically redundant, and equipped with fail-safe plans. Technical solutions must be in place to take appropriate measures against attacks and vulnerabilities.
  • Indicating their name or trademark and address on the HRAI or on its packaging,
  • Having a quality management system (Art. 17),
  • Keeping documentation (Art. 18),
  • Keeping logs that are automatically generated by the HRAI when the HRAI is under their control (Art. 20),
  • Taking the necessary corrective actions when applicable (Art. 21),
  • Ensuring that the HRAI undergoes conformity assessment prior to its placing on the market or putting into service (Art. 43),
  • Drawing up an EU declaration of conformity (Art. 48),
  • Affixing the CE marking on the HRAI, indicating its conformity (Art. 49),
  • Complying with the registration obligation (Art. 51),
  • Demonstrating the conformity of the HRAI upon a request by a national competent authority,
  • Ensuring that the HRAI complies with accessibility requirements.

2. Obligations of authorized representatives under Article 25

An “authorized representative” is defined as:

“any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by [the EU AI Act].”

In short, authorized representatives must act in accordance with the mandate received from the provider. This mandate must empower the representative to do the following:

  • Verify that the EU declaration of conformity and the technical documentation have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider
  • Keep the contact details of the provider, a copy of the EU declaration of conformity, the technical documentation, and, if applicable, the certificate issued by the notified body at the disposal of the national authorities for a period ending ten years after the HRAI has been placed on the market or put into service
  • Provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of an HRAI with the requirements
  • Cooperate with competent authorities, upon a reasoned request, on any action the latter takes in relation to the HRAI
  • Comply with the registration obligations or ensure that the information provided is correct if the registration is carried out by the provider

3. Obligations of the importers of HRAIs under Article 26

An “importer” is defined as

“any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union.”

In short, importers must ensure that the HRAI complies with the regulation. Most importantly, they must verify that the conformity assessment has been carried out, the technical documentation has been drawn up, the CE conformity marking has been affixed, and an authorized representative has been established in line with the EU AI Act.

4. Obligations of the distributors of HRAIs under Article 27

A “distributor” is defined as

“any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.”

Similar to importers, distributors also must verify the conformity of the HRAIs to the EU AI Act and cooperate with the competent authorities.

5. Obligations of the deployers of HRAIs under Article 29

A “deployer” is an entity using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

Deployers of the HRAIs primarily have the following obligations:

  • Taking appropriate technical and organizational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems,
  • Ensuring that the people assigned to ensure human oversight of the HRAIs have the necessary competence, training, and authority, as well as the necessary support,
  • Ensuring that input data is relevant and sufficiently representative in view of the intended purpose of the HRAI, to the extent the deployer exercises control over the input data,
  • Monitoring the operation of the HRAI on the basis of the instructions of use and informing providers when required,
  • Keeping the logs generated automatically by that HRAI to the extent such logs are under the deployer’s control for a period appropriate to the intended purpose of the HRAI, of at least six months, unless provided otherwise in applicable Union or national law.

There are additional specific obligations for deployers that are financial institutions.

6. Requirements and obligations of notified bodies under Articles 33, 34(1), 34(3), 34(4), 34a

A notified body is a conformity assessment entity that has been designated in accordance with the AI Act and other relevant Union harmonization legislation.

Such bodies must be organizationally capable, have adequate personnel, and ensure confidence in conformity assessment. They must ensure the highest degree of professional competence and impartiality while carrying out tasks and shall be economically independent of the providers of HRAIs and other operators.

7. Transparency obligation for providers and users of certain AI systems under Article 52

If an AI system is designed and deployed to interact with natural persons, providers and users must inform those persons in a clear and distinguishable manner that they are interacting with an AI system, unless this is obvious from the point of view of a reasonable natural person who is reasonably well-informed, observant, and circumspect. Providers and users of systems for biometric categorization, emotion recognition, and deep fake generation must also make corresponding disclosures to natural persons.

Tier 3: Supplying incorrect, incomplete, or misleading information to the authorities

Supplying incorrect or incomplete information to the authorities violates Article 23 of the Regulation, which requires cooperation with competent authorities. Providers of HRAIs shall, upon request by a competent national authority, provide authorities with all the information and documentation necessary to demonstrate the conformity of the HRAI with the requirements set out.

Replying with incorrect, incomplete, or misleading information to a request from national authorities or notified bodies is subject to fines of up to €7,500,000 or 1% of total worldwide annual turnover for companies, whichever is higher.

Are there any considerations for SMEs?

In the case of SMEs, including start-ups, each fine under these three tiers is capped at the lower of the percentage or the fixed amount, whereas offenders that are not SMEs face the higher of the two. For instance, if 3% of annual turnover exceeds €15,000,000, a non-SME business would pay up to the 3% figure while an SME would pay up to €15,000,000; conversely, if 3% of annual turnover falls below €15,000,000, an SME would pay up to the 3% figure while a non-SME would pay up to €15,000,000. The sketch below illustrates this arithmetic.
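To make the cap arithmetic concrete, here is a minimal Python sketch (illustrative only, not legal advice) that encodes the fixed amounts and turnover percentages of the three tiers described above and applies the higher-of rule for non-SMEs and the lower-of rule for SMEs. The function name fine_cap and the tier table are our own hypothetical constructs, not anything defined in the Act, and actual fines are set case by case below these caps.

```python
# Illustrative only (not legal advice): fine-cap arithmetic under the
# EU AI Act's three penalty tiers, as described in this article.

# Fixed amount (EUR) and share of total worldwide annual turnover per tier.
TIERS = {
    1: (35_000_000, 0.07),  # prohibited practices
    2: (15_000_000, 0.03),  # non-compliance with obligations
    3: (7_500_000, 0.01),   # incorrect, incomplete, or misleading information
}

def fine_cap(tier: int, annual_turnover_eur: float, is_sme: bool) -> float:
    """Return the maximum administrative fine for the given tier.

    Non-SMEs face the higher of the fixed amount and the turnover share;
    SMEs and start-ups face the lower of the two.
    """
    fixed_amount, share = TIERS[tier]
    turnover_amount = share * annual_turnover_eur
    if is_sme:
        return min(fixed_amount, turnover_amount)
    return max(fixed_amount, turnover_amount)

# Tier 2 example from the text: 3% of a €600M turnover is €18M, above the
# €15M fixed amount, so the two rules diverge.
print(fine_cap(2, 600_000_000, is_sme=False))  # 18000000.0 (higher of the two)
print(fine_cap(2, 600_000_000, is_sme=True))   # 15000000 (lower of the two)
```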

Administrative fines against providers of GPAI models

A general purpose AI (GPAI) model is defined as:

“an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”

This does not cover AI models that are used for research, development, and prototyping activities before being placed on the market.

Pursuant to Article 72a, the Commission may impose fines on providers of GPAI models of up to 3% of their total worldwide turnover in the preceding financial year or €15,000,000, whichever is higher, if it finds that the provider intentionally or negligently:

  • Infringes the provisions of the EU AI Act that are relevant to GPAIs, primarily Title VIIIa,
  • Fails to comply with a request for documents or information (Art. 68i), or supplies incorrect, incomplete, or misleading information,
  • Fails to comply with a measure requested under Article 68k,
  • Fails to make available to the Commission access to the GPAI model or GPAI model with systemic risk with a view to conducting an evaluation (Art. 68).

Administrative fines against Union bodies

According to Article 72, the European Data Protection Supervisor can also impose administrative fines on Union agencies, bodies, and institutions. Fines could be up to €1,500,000 for non-compliance with the prohibitions of the Act and €750,000 for non-compliance with obligations other than those laid down in Article 5.

How are penalties decided?

The general principle of the AI Act is that penalties shall be effective, dissuasive, and proportionate to the type of offense, previous actions, and profile of the offender. As such, the EU AI Act acknowledges that each case is individual and designates the fines as a maximum threshold, although lower penalties can be issued depending on the severity of the offense. Factors that may be considered when determining penalties include:

  • The nature, gravity, and duration of the offense,
  • The intentional or negligent character of infringements,
  • Any actions to mitigate the effects,
  • Previous fines,
  • The size, annual turnover, and market share of the offender,
  • Any financial gain or loss resulting from the offense,
  • Whether the use of the system is for professional or personal activity.

The EU AI Act also emphasizes proportionality for SMEs and start-ups, whose penalties take into account their size, interests, and economic viability.

Who decides the penalties?

There is no Union-wide central authority for imposing fines on AI operators. Since Member States must implement the penalty provisions into national law, whether fines are imposed by competent courts or by other bodies depends on each Member State's legal system. Recital (84) also points out that the implementation of penalties must respect the ne bis in idem principle, meaning that no defendant can be sanctioned twice for the same offense. Member States should consider the margins and criteria set out in the Act.

On the other hand, for the providers of GPAI models and for the Union bodies, the fines are imposed by the Commission and the European Data Protection Supervisor, respectively.

Getting started with your compliance journey

The best way to ensure that your systems comply with the Act and avoid penalties is to take steps early. No matter the stage of development of the system, a risk management framework can be developed and implemented to prevent potential future harm. Getting ahead of this regulation will help you to embrace your AI with confidence. Schedule a call to find out more about how Holistic AI’s software platform and team of experts can help you manage the risks of your AI.

Last updated 14 February 2024.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
