Penalties of the EU AI Act: The High Cost of Non-Compliance
AI Regulations

August 14, 2023

The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (known as the EU AI Act). It seeks to lay out a normative framework so that the risks of AI systems are managed and mitigated, building trust in AI systems used in the EU and protecting the fundamental rights of EU citizens. The Regulation proposes a risk-based classification for AI systems, which defines four levels of risk: minimal, limited, high, and unacceptable. Since first being proposed in 2021, the EU AI Act has undergone an extended consultation process and several rounds of amendments, including the Parliament's reports and the texts of the French and Czech presidencies, with the General Approach adopted on 6 December 2022. Progress has since accelerated: the Act was voted through by the leading European Parliament committees in May 2023 and passed by the European Parliament on 14 June 2023, signaling the commencement of trilogues.

In this blog post, we outline the penalties of the AI Act under the most recent version of the text.

Key Takeaways

  • The Regulation sets out a four-tier structure of fines for infringements.
  • The heftiest fines are imposed for violating the prohibitions on specific AI systems: up to 40,000,000 EUR or 7% of annual worldwide turnover.
  • The lowest penalties are for providing incorrect, incomplete, or misleading information: up to 5,000,000 EUR or 1% of annual worldwide turnover.
  • Penalties for non-compliance can be issued to providers, deployers, importers, distributors, and notified bodies.

Penalties of the EU AI Act for violations

Where earlier drafts took a three-tiered approach, the latest amendments of the European Parliament introduce a four-tier approach to penalties under Article 71, some of which surpass the hefty fines of the GDPR.

Tier 1: Non-compliance with the prohibitions

The heftiest fines are given for placing on the market or using systems prohibited under the AI Act due to the unacceptable level of risk that they pose. These are systems that:

  • Use subliminal techniques or are purposefully deceptive or manipulative with the objective of distorting behaviour in a way that is likely to cause significant harm
  • Exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, or a specific social or economic situation, to materially distort their behaviour in a way that is likely to cause harm
  • Use biometric information to categorise individuals according to (inferred) protected characteristics
  • Evaluate or classify individuals over a period of time as a basis for social scoring if it leads to unfavourable treatment outside of the context where the data was collected or if the resulting treatment is disproportionate to their behaviour
  • Are used as 'real-time' remote biometric identification systems in publicly accessible spaces

Non-compliance with these prohibitions is subject to fines of up to 40,000,000 EUR or up to 7% of annual worldwide turnover for companies. This surpasses the maximum penalties under the GDPR, making these some of the heftiest fines for non-compliance in the EU.
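
To make the cap arithmetic concrete, here is a minimal sketch of how the Tier 1 ceiling scales with company size. It assumes, in GDPR-style fashion, that the higher of the fixed amount and the turnover percentage applies for companies; the function name and structure are illustrative, not taken from the Act.

```python
def tier1_fine_cap(annual_worldwide_turnover_eur: float) -> float:
    """Illustrative ceiling for Tier 1 (prohibited practices) fines.

    Assumes the higher of the fixed cap and the turnover-based cap
    applies to companies; a sketch, not legal advice.
    """
    fixed_cap = 40_000_000                               # EUR 40 million
    turnover_cap = 0.07 * annual_worldwide_turnover_eur  # 7% of turnover
    return max(fixed_cap, turnover_cap)

# A company with EUR 1 billion in turnover faces a ceiling of EUR 70 million,
# since 7% of its turnover exceeds the EUR 40 million fixed amount.
print(tier1_fine_cap(1_000_000_000))  # 70000000.0
```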

Obligations under the AI Act

The EU AI Act sets out several obligations for different parties involved in the deployment and use of AI systems, with a particular focus on providers of AI systems. There are two tiers of penalties associated with non-compliance with obligations.

Obligations of providers under Article 4b

Providers are defined as entities that develop an AI system, or have one developed, with a view to placing it on the market or putting it into service under their own name or trademark, whether for payment or free of charge. A general-purpose AI system is one that can be adapted to a range of applications for which it was not intentionally designed. Article 4b outlines the obligations for providers of general-purpose AI systems, with a focus on those that are used as high-risk systems or as components of one.

  • In short, if a general-purpose system is used as a component of a high-risk system, it must comply with the high-risk requirements. However, there is an exception to these obligations if the provider of the general-purpose system has explicitly excluded high-risk uses in the instructions for use or accompanying information. If the provider is informed about misuse, they shall take measures against such misuse. Otherwise, only the prohibited practices, the transparency requirements for specific systems, and voluntary codes of conduct apply to these exempt systems (a simplified sketch of this logic follows below).
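
The conditional structure of Article 4b can be summarised in a short sketch. The function and its boolean inputs are hypothetical simplifications of the legal test, not an implementation of the Act itself.

```python
def applicable_regime(used_in_high_risk_context: bool,
                      high_risk_uses_excluded_in_instructions: bool) -> str:
    """Hypothetical simplification of the Article 4b logic for providers
    of general-purpose AI systems; illustrative only, not legal advice."""
    if used_in_high_risk_context and not high_risk_uses_excluded_in_instructions:
        return "high-risk requirements apply"
    # Exempt systems remain subject to the baseline rules.
    return ("only prohibited practices, transparency requirements for "
            "specific systems, and voluntary codes of conduct apply")

print(applicable_regime(used_in_high_risk_context=True,
                        high_risk_uses_excluded_in_instructions=False))
```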

Obligations for providers under Article 16

The following obligations apply to providers of High-Risk AI systems:

  • Risk management (Art. 9) - Providers must continuously identify and analyze foreseeable risks to the health, safety, and fundamental rights of individuals throughout the entire life cycle of the system, and take suitable measures to eliminate or minimize those risks.
  • Data governance (Art. 10) - Training, validation, and testing data sets must meet quality criteria regarding relevance, data collection processes, and examination for possible biases. Data governance and management practices must ensure, to the best extent possible, the use of relevant and representative data sets.
  • Technical documentation (Art. 11) - Technical documentation demonstrating compliance with the requirements must be drawn up and kept up to date, enabling a clear and comprehensive assessment of the system's compliance.
  • Record-keeping (Art. 12) - Events must be automatically recorded throughout the life cycle of the system, enabling the tracing of its functioning and the identification of possible breaches.
  • Transparency (Art. 13) - Providers must accompany systems with clear instructions for use that indicate the system's characteristics, capabilities, and limitations.
  • Human oversight (Art. 14) - Appropriate human-machine interface tools must be implemented so that the system can be overseen by a natural person, who must be able to understand, interpret, and intervene in the operation of the system.
  • Accuracy, robustness, and cybersecurity (Art. 15) - Throughout its life cycle, the system must be resilient to faults and errors, technically redundant, and equipped with fail-safe plans. Technical solutions must be in place to counter attacks exploiting vulnerabilities.

In addition, obligations also include registration, log-keeping, affixing the CE marking, quality management, informing competent authorities, taking corrective action in the event of non-compliance, and indicating the provider's name.
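
For teams tracking these duties internally, the Article 16 obligations can be organised as a simple checklist structure. The mapping and helper below are a hypothetical sketch of such a tracker, not an official taxonomy.

```python
# Hypothetical checklist for providers of High-Risk AI systems, summarising
# the Article 16 obligations discussed above; illustrative only.
ARTICLE_16_OBLIGATIONS = {
    "Art. 9": "Risk management across the full life cycle",
    "Art. 10": "Data governance: relevant, representative, bias-examined data",
    "Art. 11": "Up-to-date technical documentation demonstrating compliance",
    "Art. 12": "Automatic record-keeping (logging) of events",
    "Art. 13": "Transparency: instructions, capabilities, and limitations",
    "Art. 14": "Human oversight by a natural person",
    "Art. 15": "Accuracy, robustness, and cybersecurity",
}

def outstanding(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked as addressed."""
    return [f"{article}: {duty}"
            for article, duty in ARTICLE_16_OBLIGATIONS.items()
            if article not in completed]

print(outstanding({"Art. 9", "Art. 11"}))
```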

Obligations for certain other persons under Article 23a

Article 23a lays out the conditions under which natural or legal persons are considered providers of a High-Risk AI system under the Regulation. The obligations of providers of High-Risk AI systems apply to them if any of the following conditions are met:

  • They put their name or trademark on a High-Risk AI system already placed on the market, or substantially modify such a system
  • They put into service a general-purpose AI system as a High-Risk AI system
  • They deploy a general-purpose AI system as a component of a High-Risk AI system.

Obligations of importers under Article 26

According to Article 3(6), "'importer' means any natural or legal person physically present or established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union".

In short, importers must ensure that the High-Risk AI system complies with the Regulation. Most importantly, they must verify that the conformity assessment, the technical documentation, the CE conformity marking, and the appointment of an authorised representative are in line with the Regulation.

Obligations of distributors under Article 27

Article 3(7) defines a distributor as an entity in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.

Similar to importers, distributors must verify the conformity of High-Risk AI systems with the Regulation and cooperate with the competent authorities.

Obligations of deployers under Article 29

A deployer (formerly "user") is an entity that uses an AI system under its authority. Deployers of High-Risk AI systems must use the systems in accordance with the instructions for use, implement human oversight, monitor the operation of the system, keep the logs, take data protection provisions into account, and cooperate with national authorities. Additional specific obligations apply to deployers that are financial institutions.

Requirements and obligations of notified bodies under Articles 33, 34(1), 34(3), 34(4), and 34a

A notified body is a conformity assessment entity that has been designated in accordance with the AI Act and other relevant Union harmonisation legislation.

Such bodies must be organisationally capable, have adequate personnel, and ensure confidence in the conformity assessment process. They must ensure the highest degree of professional competence and impartiality while carrying out their tasks, and must be economically independent of the providers of High-Risk AI systems and other operators.

Transparency obligation for providers and users under Article 52

If the AI system is designed and deployed to interact with natural persons, providers and users must inform natural persons in a clear and distinguishable manner that they are interacting with an AI system unless this is obvious from the point of view of a reasonable natural person who is reasonably well-informed, observant, and circumspect.

Users of systems for biometric categorization, emotion recognition, and deep fake generation must also disclose this to the natural persons exposed to them.

Tiers 2 and 3: Infringements of obligations

The AI Act sets out two tiers of penalties for infringements of the obligations. The recipients of these fines are mostly providers; however, penalties can also be imposed on deployers, importers, distributors, and even notified bodies.

The higher tier covers infringements of Article 10 (data and data governance) and Article 13 (transparency), with fines of up to 20 million euros or 4% of total worldwide annual turnover if the offender is a company.

The lower tier covers non-compliance of AI systems or foundation models with any requirements or obligations other than those laid down in Articles 5, 10, and 13, with penalties of up to 10 million euros or 2% of annual worldwide turnover if the offender is a company.

Tier 4: Supplying incorrect, incomplete, or misleading information to the authorities

Supplying incorrect, incomplete, or misleading information is a violation of Article 23 of the Regulation, which requires cooperation with competent authorities. Providers of High-Risk AI systems must, upon request by a competent national authority, provide all the information and documentation necessary to demonstrate the conformity of the system with the requirements set out in the Regulation.

Replying to a request from national authorities or notified bodies with incorrect, incomplete, or misleading information is subject to fines of up to 5 million euros or 1% of total worldwide annual turnover for companies.
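
Putting the four tiers together, the penalty ceilings can be summarised as a small lookup table. The figures follow the Parliament text as described above; the code itself, including the assumption that the higher of the two caps applies to companies, is an illustrative sketch.

```python
# Illustrative summary of the four penalty tiers under Article 71
# (European Parliament text, June 2023). Each tier maps to a
# (fixed cap in EUR, share of annual worldwide turnover) pair; the
# higher of the two is assumed to apply to companies.
PENALTY_TIERS = {
    "Tier 1: prohibited practices (Art. 5)": (40_000_000, 0.07),
    "Tier 2: data governance and transparency (Arts. 10, 13)": (20_000_000, 0.04),
    "Tier 3: other requirements or obligations": (10_000_000, 0.02),
    "Tier 4: incorrect, incomplete or misleading information": (5_000_000, 0.01),
}

def max_fine(tier: str, turnover_eur: float) -> float:
    """Ceiling for a given tier and company turnover (illustrative)."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * turnover_eur)

for tier in PENALTY_TIERS:
    print(tier, "->", max_fine(tier, 500_000_000))
```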

Administrative fines against Union bodies

According to Article 72, the European Data Protection Supervisor can also impose administrative fines on Union agencies, bodies, and institutions. Fines can be up to 1.5 million euros for non-compliance with the prohibitions of the Act, up to 1 million euros for non-compliance with Article 10, and up to 750 thousand euros for non-compliance with obligations other than those laid down in Articles 5 and 10.

How are penalties decided?

The general principle of the AI Act is that penalties must be effective, dissuasive, and proportionate to the type of offence, the offender's previous actions, and the offender's profile. The Regulation accordingly treats each case individually and sets the fines as maximum thresholds, with lower penalties possible depending on the severity of the offence. Factors that may be considered when determining penalties include:

  • The nature, gravity, and duration of the offence
  • The intentional or negligent character of infringements
  • Any actions to mitigate the effects
  • Previous fines
  • The size, annual turnover, and market share of the offender
  • Any financial gain or loss resulting from the offence
  • Whether the use of the system is for professional or personal activity

The Regulation also emphasizes a proportionality approach for SMEs and start-ups, whose penalties are set at lower levels in view of their size, interests, and economic viability.

Who decides the penalties?

There is no Union-wide central authority for imposing fines. Since Member States must implement the infringement provisions in national law, whether fines are imposed by competent courts or by other bodies depends on each Member State's legal system. Recital 84 also points out that the implementation of penalties must respect the ne bis in idem principle, which means no defendant can be sanctioned twice for the same offence. Member States should observe the margins and criteria set out in the Regulation.

Getting started with your compliance journey

The best way to ensure that your systems comply with the Regulation, and to avoid penalties, is to take steps early. No matter the stage of development of the system, a risk management framework can be developed and implemented to prevent potential future harms. Getting ahead of this regulation will help you to embrace your AI with confidence. Schedule a call to find out more about how Holistic AI's software platform and team of experts can help you manage the risks of your AI.

Written by Anıl Tahmisoğlu, LLM Candidate at Utrecht University. Follow him on LinkedIn.

Edited by Siddhant Chatterjee, Public Policy Associate at Holistic AI, and Airlie Hilliard, Senior Researcher at Holistic AI.

Last updated 14 August 2023.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
