The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (known as the EU AI Act). The Act seeks to lay out a normative framework to manage and mitigate the risks of AI systems, build trust in AI systems used in the EU, and protect the fundamental rights of EU citizens. The Regulation proposes a risk-based classification for AI systems with four levels of risk: minimal, limited, high, and unacceptable. Since first being proposed in 2021, the EU AI Act has undergone an extended consultation process and several rounds of amendments, including the Parliament’s reports and the compromise texts of the French and Czech presidencies, culminating in the General Approach adopted on 6 December 2022. Progress has since accelerated: the Act was voted through by the leading European Parliamentary Committees in May 2023 and passed by the European Parliament on 14 June 2023, signaling the commencement of trilogues.
In this blog post, we outline the penalties of the AI Act under the most recent version of the text.
Key Takeaways
Previously a three-tiered approach, the latest amendments of the European Parliament introduced a four-tier approach to penalties under Article 71, some of which surpass the hefty fines of GDPR.
The heftiest fines are reserved for placing on the market or using systems that are prohibited under the AI Act due to the unacceptable level of risk they pose.
Non-compliance with these prohibitions is subject to fines of up to 40 million euros or up to 7% of total worldwide annual turnover for companies. This surpasses the penalties under the GDPR, making these some of the heftiest fines for non-compliance in the EU.
The EU AI Act sets out several obligations for different parties involved in the deployment and use of AI systems, with a particular focus on providers of AI systems. There are two tiers of penalties associated with non-compliance with obligations.
Providers are defined as entities that develop an AI system (or adapt a general-purpose AI system) with a view to placing it on the market or putting it into service under their own name or trademark, whether for payment or free of charge. A general-purpose AI system is one that can be adapted to a range of applications for which it was not intentionally designed. Article 4b outlines the obligations for providers of general-purpose AI systems, with a focus on those used as High-Risk AI systems or as components of one.
The following obligations apply to providers of High-Risk AI systems:
In addition, obligations also include registration, log-keeping, affixing CE marking, quality management, informing competent authorities, taking corrective action in the event of non-compliance, and indicating the provider's name.
Article 23a lays out the conditions for natural or legal persons to be considered providers of a High-Risk AI system under the regulation. The article refers to the obligations of the providers of High-Risk AI systems if the following conditions apply:
In short, importers must ensure that the High-Risk AI system complies with the regulation. Most importantly, they shall verify that the conformity assessment, technical documentation, CE conformity marking, and established authorized representative are in line with the regulation.
Article 3(7) defines a distributor as any entity in the supply chain, other than the provider or importer, that makes an AI system available on the EU market.
A notified body is a conformity assessment entity that has been designated in accordance with the AI Act and other relevant Union harmonisation legislation.
If the AI system is designed and deployed to interact with natural persons, providers and users must inform natural persons in a clear and distinguishable manner that they are interacting with an AI system unless this is obvious from the point of view of a reasonable natural person who is reasonably well-informed, observant, and circumspect.
Those deploying systems for biometric categorization, emotion recognition, or deep fake generation must also disclose this to the natural persons exposed to them.
The AI Act sets out two tiers of penalties for infringement of these obligations. The recipients of the fines are, for the most part, providers; however, penalties can also be imposed on users, importers, distributors, and even notified bodies.
The higher tier covers infringements of Article 10 (data and data governance) and Article 13 (transparency), with fines of up to 20 million euros or 4% of total worldwide annual turnover if the offender is a company.
The lower tier covers non-compliance of AI systems or foundation models with any requirements or obligations other than those laid down in Articles 5, 10, and 13, with penalties of up to 10 million euros or 2% of total worldwide annual turnover if the offender is a company.
Supplying incorrect or incomplete information is a violation of Article 23 of the Regulation, which requires cooperation with competent authorities. Providers of high-risk AI systems shall, upon request by a competent national authority, provide all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in the Regulation.
Replying to a request from national authorities or notified bodies with incorrect, incomplete, or misleading information is subject to fines of up to 500,000 euros or 1% of total worldwide annual turnover for companies.
According to Article 72, the European Data Protection Supervisor can also impose administrative fines on Union agencies, bodies, and institutions. Fines could be up to 1.5 million euros for non-compliance with the prohibitions of the Act, up to 1 million euros for non-compliance with Article 10, and up to 750,000 euros for non-compliance with obligations other than those laid down in Articles 5 and 10.
The general principle of the AI Act is that penalties shall be effective, dissuasive, and proportionate to the type of offence, previous actions, and the profile of the offender. As such, the Regulation acknowledges that each case is individual and designates the fines as maximum thresholds; lower penalties can be issued depending on the severity of the offence. Factors that may be considered when determining penalties include:
The Regulation also emphasizes a proportionality approach for SMEs and start-ups, which face lower penalties that account for their size, interests, and economic viability.
There is no Union-wide central authority for imposing fines. As Member States must implement the provisions on infringements into national law, whether fines are imposed by competent courts or other bodies depends on each Member State's national legal system. Recital (84) also points out that the implementation of penalties must respect the ne bis in idem principle, meaning that no defendant can be sanctioned twice for the same offence. Member States should consider the margins and criteria set out in the Regulation.
The best way to ensure that your systems comply with the Regulation and avoid penalties is to take steps early. No matter the stage of development of a system, a risk management framework can be developed and implemented to prevent potential future harms. Getting ahead of this regulation will help you to embrace your AI with confidence. Schedule a call to find out more about how Holistic AI’s software platform and team of experts can help you manage the risks of your AI.
Written by Anıl Tahmisoğlu, LLM Candidate at Utrecht University. Follow him on LinkedIn.
Edited by Siddhant Chatterjee, Public Policy Associate at Holistic AI, and Airlie Hilliard, Senior Researcher at Holistic AI.
Last updated 14 August 2023.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.