Penalties of the EU AI Act: The High Cost of Non-Compliance

November 24, 2022

Key Takeaways

  • The regulation sets out a three-tier structure for fines against infringements, with the tiers in descending order of severity.
  • The heftiest fines are imposed for violating the prohibition of specific AI systems: up to 30 million euros or 6% of annual worldwide turnover, whichever is higher.
  • A substantial share of the infringements relate to high-risk systems and to every actor within their lifecycle; not only providers but also users, importers, distributors, and notified bodies face heavy sanctions.
  • SMEs and start-ups are subject to lower fines.
  • The penalty regime reflects the significance the Act gives to high-risk systems.

Penalties of the EU AI Act for violations

The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (known as the EU AI Act). It seeks to lay out a normative framework so that the risks of AI systems are managed and mitigated, building trust in AI systems in the EU. The Regulation proposes a risk-based classification for AI systems, which defines four levels of risk: minimal, limited, high, and unacceptable. The proposal thus prioritises the fundamental rights of individuals and the risks posed to these rights. The Proposal for the EU AI Act of April 2021 has undergone an extended consultation process and has been amended several times since, for example through the Parliament's reports and the compromise texts of the French and Czech Council presidencies.
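
For readers who want to mirror this classification in their own tooling, the four tiers can be captured with a simple enumeration. This is an illustrative sketch only; the Act defines the tiers in prose, and the identifiers and comments below are our own mapping of the tiers to the obligations discussed in this article.

```python
from enum import Enum

class RiskLevel(Enum):
    """The EU AI Act's four risk tiers, from lowest to highest."""
    MINIMAL = 1       # largely unregulated; voluntary codes of conduct
    LIMITED = 2       # transparency obligations (Art. 52)
    HIGH = 3          # full high-risk requirements (e.g. Arts. 9-15)
    UNACCEPTABLE = 4  # prohibited practices (Art. 5)
```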

The latest compromise text of the EU AI Act (released 3 November 2022, the 'Preparation for Coreper' text) sets out penalties in Article 71, under Title X, 'Confidentiality and Penalties'. These penalties can be grouped into three major themes: non-compliance with the prohibited practices, infringements of the obligations laid down in the Regulation, and the supply of incorrect, incomplete, or misleading information to the authorities.

Below, we provide an overview of the Act's prohibitions and obligations and explain the infringements and types of penalties, before outlining how the penalties are determined and by whom.

1. Penalties at a glance

  • Highlighting the importance the EU AI Act places on the regulation of high-risk systems, liability for infringements of the high-risk obligations extends to many stakeholders within the AI lifecycle.
  • The EU AI Act therefore requires utmost diligence from all parties participating in a high-risk AI system's lifecycle. The requirements for these systems are lengthy, and the stakes are high, emphasizing that high risk does not mean high reward.
  • Adequate compliance with the regulatory requirements will require comprehensive risk management frameworks, technical expertise, and extensive knowledge of the Act.
  • Penalties are given for non-compliance with the prohibited practices, infringements of the obligations for high-risk systems, and infringement of the duty of cooperation with the competent national authorities.

2. Non-compliance with the prohibitions

Article 71(3) refers to the prohibited practices under Article 5, which prohibits the placing on the market, putting into service, or use of AI systems that:

  1. Deploy subliminal techniques beyond a person's consciousness with the objective of, or the effect of, materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
  2. Exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, or a specific social or economic situation, with the objective of, or the effect of, materially distorting the behaviour of a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
  3. Evaluate or classify the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) Detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

(ii) Detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

  4. Are used as remote biometric identification systems in publicly accessible spaces by law enforcement authorities, or on their behalf, for the purpose of law enforcement, unless and insofar as such use is strictly necessary for one of the objectives enumerated in the article and complies with the safeguards and authorisations it lays down.

Non-compliance with these prohibitions is subject to fines of up to 30 million euros or up to 6% of annual worldwide turnover for companies, whichever is higher (for SMEs and start-ups, fines shall be up to 3% of annual worldwide turnover).
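
To make the 'whichever is higher' mechanics concrete, here is a minimal Python sketch of the fine ceiling for this tier. The function name and inputs are illustrative, not terms from the Act, and the same pattern applies at the lower caps of the other two tiers discussed below.

```python
def prohibition_fine_ceiling(turnover_eur: float, is_sme_or_startup: bool) -> float:
    """Maximum fine for violating an Article 5 prohibition.

    Article 71 (compromise text): up to 30 million euros or 6% of
    annual worldwide turnover, whichever is higher; SMEs and
    start-ups are instead capped at 3% of annual worldwide turnover.
    """
    if is_sme_or_startup:
        return 0.03 * turnover_eur
    return max(30_000_000, 0.06 * turnover_eur)

# A company with 1 billion euros of turnover faces up to
# max(30M, 60M) = 60 million euros for a prohibited practice.
print(prohibition_fine_ceiling(1_000_000_000, is_sme_or_startup=False))
```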

3. Infringements of the obligations

3.1. What are the obligations?

The EU AI Act sets out several obligations for different parties involved in the deployment and use of AI systems:

  1. Obligations of providers under Article 4b

Article 4b outlines the obligations for providers of general-purpose AI systems, with a focus on those that are used as high-risk systems or as components of one.

  • According to Article 3(1b), ‘general purpose AI system’ means an AI system that - irrespective of the modality in which it is placed on the market or put into service, including as open source software - is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems.
  • According to Article 3(2), ‘provider’ means a natural or legal person, public authority, agency, or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

In short, if a general-purpose system is used as a component of a high-risk system, it must comply with the high-risk requirements. These include the obligations for providers, such as risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity, which are outlined in the following section. In Article 4c, the Regulation makes an exception to these obligations if the provider of the general-purpose system has, in good faith, explicitly excluded high-risk uses in the instructions or information for use. If the provider is informed about misuse, it must take measures against such misuse. Otherwise, only the prohibited practices, the transparency requirements for specific systems, and voluntary codes of conduct apply to these exempt systems.
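
As a rough illustration of this decision logic, the sketch below maps the Article 4b/4c conditions to the sets of obligations described in this article. The flags and labels are hypothetical simplifications; the Act expresses these conditions in prose, not as boolean tests.

```python
def applicable_obligations(used_as_high_risk_component: bool,
                           high_risk_uses_excluded_in_good_faith: bool) -> list:
    """Sketch of which obligations reach a general-purpose AI provider
    under Articles 4b and 4c of the compromise text."""
    baseline = [
        "prohibited practices (Art. 5)",
        "transparency for specific systems (Art. 52)",
        "voluntary codes of conduct",
    ]
    if used_as_high_risk_component and not high_risk_uses_excluded_in_good_faith:
        # Art. 4b: the high-risk requirements apply in full.
        return baseline + ["high-risk requirements (Arts. 9-15)"]
    # Art. 4c exemption: only the baseline applies, but the provider
    # must still act against misuse it becomes aware of.
    return baseline

print(applicable_obligations(True, high_risk_uses_excluded_in_good_faith=False))
```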

  2. Obligations for providers under Article 16

The following obligations apply to providers of high-risk AI systems:

  • Risk management (Art. 9) - Providers must continuously identify and analyse foreseeable risks to the health, safety, and fundamental rights of individuals across the entire lifecycle of the system, and take suitable measures to eliminate or minimise those risks.
  • Data governance (Art. 10) - Training, validation, and testing data sets must meet quality criteria in terms of relevance, data collection and processing operations, and examination of possible biases. Data governance and management practices must ensure, to the best extent possible, the use of relevant and representative data sets.
  • Technical documentation (Art. 11) - Technical documentation is needed to demonstrate compliance with the requirements and must be kept up to date. It should allow a clear and comprehensive assessment of the system's compliance.
  • Record-keeping (Art. 12) - Events must be automatically recorded over the lifecycle of the system, allowing its proper functioning to be traced and possible breaches to be identified.
  • Transparency (Art. 13) - Providers must accompany the system with clear instructions that indicate its characteristics, capabilities, and limitations.
  • Human oversight (Art. 14) - Appropriate human-machine interface tools must be implemented so that the system can be overseen by a natural person, who must be able to understand, interpret, and intervene in the system's operation.
  • Accuracy, robustness, and cybersecurity (Art. 15) - Throughout its lifecycle, the system must be resilient to faults and errors, technically redundant, and covered by fail-safe plans. Technical solutions must be in place to take appropriate measures against attacks exploiting vulnerabilities.

In addition, obligations include registration, log-keeping, affixing the CE marking, quality management, informing competent authorities, taking corrective action in the event of non-compliance, and indicating the provider's name; one simple way to track these duties is sketched below.
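
As a simple illustration, a provider could track these Article 16 duties with a plain checklist structure. This is a hypothetical sketch; the keys paraphrase the obligations above and are not official identifiers from the Act.

```python
# Hypothetical compliance checklist for a provider of a high-risk AI system.
article_16_checklist = {
    "risk management (Art. 9)": False,
    "data governance (Art. 10)": False,
    "technical documentation (Art. 11)": False,
    "record-keeping (Art. 12)": False,
    "transparency (Art. 13)": False,
    "human oversight (Art. 14)": False,
    "accuracy, robustness and cybersecurity (Art. 15)": False,
    "registration": False,
    "CE marking": False,
    "quality management": False,
}

def outstanding(checklist: dict) -> list:
    """Return the duties not yet evidenced as complete."""
    return [duty for duty, done in checklist.items() if not done]

print(outstanding(article_16_checklist))
```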

  3. Obligations for certain other persons under Article 23a

Article 23a lays out the conditions under which other natural or legal persons are considered providers of a high-risk AI system under the Regulation. They become subject to the obligations of providers of high-risk AI systems if any of the following conditions apply:

  • They put their name or trademark on, or substantially modify, a high-risk AI system already placed on the market or put into service;
  • They put into service a general-purpose AI system as a high-risk AI system;
  • They deploy a general-purpose AI system as a component of a high-risk AI system.
  4. Obligations of importers under Article 26
  • According to Article 3(6), ‘importer’ means any natural or legal person physically present or established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union.

In short, importers must ensure that the high-risk AI system complies with the Regulation. Most importantly, they must verify that the conformity assessment has been carried out, the technical documentation drawn up, the CE conformity marking affixed, and an authorised representative established, in line with the Regulation.

  5. Obligations of distributors under Article 27
  • Article 3(7) defines a ‘distributor’ as “any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties”.

Similar to importers, distributors must also verify the conformity of high-risk AI systems with the Regulation and cooperate with the competent authorities.

  6. Obligations of users under Article 29
  • According to Article 3(4), ‘user’ means any natural or legal person, including a public authority, agency or other body, under whose authority an AI system is used.

Users of high-risk AI systems must use the systems in accordance with the instructions for use, implement human oversight, monitor the operation of the system, keep the logs, take data protection provisions into account, and cooperate with national authorities. There are additional specific obligations for users that are financial institutions.

  7. Requirements and obligations of notified bodies under Articles 33, 34(1), 34(3), 34(4), and 34a
  • According to Article 3(22), ‘notified body’ means “a conformity assessment body designated in accordance with this Regulation and other relevant Union harmonisation legislation”.

Notified bodies must be organisationally capable, have adequate personnel, and ensure confidence in conformity assessment. They must ensure the highest degree of professional competence and impartiality while carrying out their tasks. Moreover, they must be economically independent of the providers of high-risk AI systems and other operators.

  8. Transparency obligations for providers and users under Article 52
  • If an AI system is designed and deployed to interact with natural persons, providers and users must inform those persons, in a clear and distinguishable manner, that they are interacting with an AI system, unless this is obvious from the point of view of a reasonable natural person who is reasonably well-informed, observant, and circumspect.
  • Providers and users of systems used for biometric categorisation, emotion recognition, or deep fake generation must likewise disclose this to natural persons.

For the most part, the recipients of these fines are providers. However, penalties can also be imposed on users, importers, distributors, and even notified bodies. With the exception of the final two items, the obligations relate to high-risk AI systems.

The last item refers to the transparency requirements for specific AI systems; even though these systems are not classified as high-risk, infringements are subject to the same level of fines.

Infringements of these obligations are subject to fines of up to 20 million euros or up to 4% of total worldwide annual turnover for companies, whichever is higher (for SMEs and start-ups, up to 2% of annual worldwide turnover).

4. Supplying incorrect, incomplete, or misleading information to the authorities

Supplying incorrect or incomplete information violates Article 23 of the Regulation, which requires cooperation with the competent national authorities. Providers of high-risk AI systems shall, upon request by a competent national authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in the Regulation.

Replying with incorrect, incomplete, or misleading information to a request from national authorities or notified bodies is subject to fines of up to 10 million euros or up to 2% of total worldwide annual turnover for companies, whichever is higher (for SMEs and start-ups, up to 1% of total worldwide annual turnover).

5. Administrative fines against Union bodies

According to Article 72, the European Data Protection Supervisor can also impose administrative fines on Union agencies, bodies, and institutions. Fines can be up to 500,000 EUR for non-compliance with the prohibitions of the Act or with the data governance requirements, and up to 250,000 EUR for infringements of any other obligation laid out in the Regulation.

6. How are penalties decided?

The general principle of the AI Act is that penalties must be effective, dissuasive, and proportionate to the type of offence, the offender's previous actions, and their profile. As such, the Regulation acknowledges that each case is individual and designates the stated fines as maximum thresholds; lower penalties can be issued depending on the severity of the offence. Factors that may be considered when determining penalties include:

  • The nature, gravity, and duration of the offence
  • The intentional or negligent character of infringements
  • Any actions to mitigate the effects
  • Previous fines
  • The size, annual turnover, and market share of the offender
  • Any financial gain or loss resulting from the offence
  • Whether the use of the system is for professional or personal activity

The Regulation also emphasizes a proportionality approach for SMEs and start-ups, which receive lower penalties that take into account their size, interests, and economic viability.

7. Who decides the penalties?

There is no Union-wide central authority for imposing fines. Member States must implement the penalty provisions in national law, so whether fines are imposed by competent courts or by other bodies depends on each Member State's legal system. Recital 84 also points out that penalties must be applied in accordance with the ne bis in idem principle, meaning no defendant can be sanctioned twice for the same offence. Member States should observe the margins and criteria set out in the Regulation.

8. Getting started with your compliance journey

The best way to ensure that your systems comply with the Regulation, and thereby avoid penalties, is to take steps early. No matter the system's stage of development, a risk management framework can be developed and implemented to prevent potential future harms. Getting ahead of this regulation will help you to embrace your AI with confidence. Schedule a demo to find out more about how Holistic AI's software platform and team of experts can help you manage the risks of your AI.

Written by Anıl Tahmisoğlu, LLM Candidate at Utrecht University. Follow him on LinkedIn.
