The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with its proposal for Harmonised Rules on Artificial Intelligence (known as the EU AI Act). It seeks to lay out a normative framework so that the risks of AI systems are managed and mitigated, building trust in AI systems in the EU. The Regulation proposes a risk-based classification for AI systems, defining four levels of risk: minimal, limited, high, and unacceptable. The proposal thus prioritises the fundamental rights of individuals and the risks posed to those rights. Since its publication in April 2021, the Proposal for the EU AI Act has undergone an extended consultation process and several rounds of amendment, including the Parliament's reports and the compromise texts of the French and Czech presidencies.
The latest compromise text of the EU AI Act (released 3 November 2022, in preparation for Coreper) sets out penalties in Title X, "Confidentiality and Penalties", under Article 71. These penalties can be grouped into three major themes:
Below, we provide an overview of the prohibitions and obligations of the Act and an explanation of infringements and the types of penalties, before outlining how the penalties are determined and by whom.
Article 71(3) refers to the prohibited practices under Article 5, which prohibits systems from being made available on the EU market or supplied directly to users where they lead to:
(i) Detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
(ii) Detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
Non-compliance with these prohibitions is subject to fines of up to 30 million Euros or up to 6% of annual worldwide turnover for companies, whichever is higher (for SMEs and start-ups, fines shall be up to 3% of annual worldwide turnover).
The EU AI Act sets out several obligations for different parties involved in the deployment and use of AI systems:
Article 4b outlines the obligations for providers of general-purpose AI systems, with a focus on those that are used as high-risk systems or as components of one.
In short, if a general-purpose system is used as a component of a high-risk system, it shall comply with the high-risk requirements. These include obligations for providers such as risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity, which are outlined in the following section. In Article 4c, the Regulation makes an exception to these obligations if the provider of the general-purpose system, in good faith, explicitly excludes high-risk uses in the instructions for use or accompanying information. If the provider is informed of misuse, they shall take measures against it. Otherwise, only the prohibited practices, the transparency requirements for specific systems, and voluntary codes of conduct apply to these exempt systems.
The following obligations apply to providers of High-Risk AI systems:
In addition, obligations also include registration, log-keeping, affixing CE marking, quality management, informing competent authorities, taking corrective action in the event of non-compliance, and indicating the provider's name.
Article 23a lays out the conditions under which natural or legal persons are considered providers of a High-Risk AI system under the Regulation. Such persons become subject to the obligations of providers of High-Risk AI systems if the following conditions apply:
In short, importers must ensure that the High-Risk AI system complies with the Regulation. Most importantly, they shall verify that the conformity assessment has been carried out, the technical documentation drawn up, the CE conformity marking affixed, and an authorised representative established, in line with the Regulation.
Like importers, distributors must also verify the conformity of High-Risk AI systems with the Regulation and cooperate with the competent authorities.
Users of High-Risk AI systems must use them in accordance with the instructions for use, implement human oversight, monitor the operation of the high-risk AI system, keep the logs, take data protection provisions into account, and cooperate with national authorities. Additional specific obligations apply to users that are financial institutions.
Notified bodies must be organisationally capable, have adequate personnel, and ensure confidence in conformity assessment. They must ensure the highest degree of professional competence and impartiality while carrying out tasks. Moreover, they shall be economically independent of the providers of High-Risk AI systems and other operators.
The recipients of the fines are, for the most part, providers. However, penalties can also be imposed on users, importers, distributors, and even notified bodies. With the exception of the final two, these obligations relate to High-Risk AI systems.
The last part of this article refers to the transparency requirements for specific AI systems; even though these systems are not classified as High-Risk, infringements attract the same level of fines.
Infringement of these obligations is subject to fines of up to 20 million Euros or up to 4% of total worldwide turnover for companies (for SMEs and start-ups, up to 2% of annual worldwide turnover).
Supplying incorrect or incomplete information is a violation of Article 23 of the Regulation, which requires cooperation with competent authorities. Providers of high-risk AI systems shall, upon request by a competent national authority, provide those authorities with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out.
Replying with incorrect, incomplete, or misleading information to a request from national authorities or notified bodies is subject to fines of up to 10 million Euros or 2% of total worldwide turnover for companies (for SMEs and start-ups, up to 1% of total worldwide turnover).
According to Article 72, the European Data Protection Supervisor can also impose administrative fines on Union agencies, bodies, and institutions. Fines can reach up to 500,000 EUR for non-compliance with the prohibitions of the Act, and up to 250,000 EUR for infringements of any other obligation laid out in the Regulation, except data governance.
The general principle of the AI Act is that penalties shall be effective, dissuasive, and proportionate to the type of offence, previous actions, and the profile of the offender. As such, the Regulation acknowledges that each case is individual and designates the fines as maximum thresholds; lower penalties can be issued depending on the severity of the offence. Factors that may be considered when determining penalties include:
The Regulation also emphasises a proportionality approach for SMEs and start-ups, which face lower penalty caps reflecting their size, interests, and economic viability.
There is no Union-wide central authority for imposing fines. Since the Member States must implement the provisions on infringements into national law, it depends on each Member State's legal system whether fines are imposed by competent courts or by other bodies. Recital (84) also points out that the implementation of penalties must respect the ne bis in idem principle, meaning no defendant can be sanctioned twice for the same offence. Member States should consider the margins and criteria set out in the Regulation.
The best way to ensure that your systems comply with the Regulation, and so avoid penalties, is to take steps early. Whatever the stage of development of the system, a risk management framework can be developed and implemented to prevent potential future harms. Getting ahead of this regulation will help you to embrace your AI with confidence. Schedule a demo to find out more about how Holistic AI's software platform and team of experts can help you manage the risks of your AI.
Written by Anıl Tahmisoğlu, LLM Candidate at Utrecht University. Follow him on LinkedIn.