First proposed on 21 April 2021, the European Commission’s Harmonised Rules on Artificial Intelligence, colloquially known as the EU AI Act, seek to lead the world in AI regulation. Likely to become the global gold standard for AI regulation, much like the General Data Protection Regulation (GDPR) did for privacy regulation, the rules aim to create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI.
Since the rules were first proposed, an extensive consultation process has resulted in a number of amendments in the form of compromise texts. Among those shaping the regulation are the European Council and the French Presidency, both of which have published compromise texts. The latest, and likely final, version came from the Czech Presidency, whose General Approach text was published on 25 November 2022.
While AI and the associated automation can offer many benefits, such as increased efficiency and accuracy, using AI also poses novel technical, legal, and ethical risks, and scandals have already affected multiple high-risk applications of AI. Moreover, while existing laws, such as the GDPR, do apply to AI to some extent, they alone are not sufficient to prevent AI from causing harm.
Under the Act, providers of AI systems established in the EU must comply with the regulation, along with providers in third countries that place AI systems on the EU market and users of AI systems located in the EU. It also applies to providers and users based in third countries if the output of the system is used within the EU. Exempt from the regulation are those who use AI systems for military purposes and public authorities in third countries.
The regulation uses a risk-based approach to determine both the obligations and penalties for different types of systems. Accordingly, systems are classified as posing low or minimal risk, limited risk, high risk, or unacceptable risk.
• Low-risk systems, such as spam filters and AI-enabled video games, comprise the majority of the systems currently on the market. These systems face no obligations under the rules in their current form.
• Systems with limited risk are those that i) interact with humans, ii) detect humans or categorise people based on biometric data, or iii) produce manipulated content. These systems include chatbots and those used to produce deep fakes.
• High-risk systems are those that can have a significant impact on the life chances of a user. These systems are subject to stringent requirements that must be met before they can be deployed on the EU market.
• Systems with unacceptable risk are banned from sale on the EU market and include those that: deploy subliminal techniques to materially distort a person’s behaviour in a way that causes or is likely to cause harm; exploit the vulnerabilities of specific groups, such as those based on age or disability; are used by public authorities for social scoring; or carry out real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.
However, the classification of systems used in insurance as high-risk has proved to be a debated topic. While the original proposal did not include these systems in the initial list of high-risk use cases, the Slovenian Presidency Compromise Text added systems used for insurance premium setting, underwriting, and claims assessment as an additional use case under the public and private services section. Taking a more targeted approach, the Czech Presidency’s fourth compromise text proposes that only systems used for these purposes in relation to health and life insurance constitute high-risk.
Eight use cases fall into this category:
• Biometric identification and categorisation of natural persons
• Management and operation of critical infrastructure
• Education and vocational training
• Employment, workers management, and access to self-employment
• Access to essential private services and public services and benefits
• Law enforcement
• Migration, asylum, and border control management
• Administration of justice and democratic processes
These systems are subject to more stringent requirements than any other category, including the use of high-quality data, appropriate documentation practices, transparency, adequate human oversight, and testing for accuracy and robustness. In addition to these requirements, identifications made by biometric systems cannot be used to inform actions or decisions unless the identification has been verified by at least two people.
Following any major change to the system, such as the model being retrained on new data or features being removed from the model, the system must undergo additional conformity assessments to ensure that the requirements are still being met before it can be re-certified and registered in the database.
The EU AI Act’s sector-agnostic approach will help to ensure consistent standards for regulating AI across the board. The rules impose obligations that are proportionate to the risk of the system, ensuring that potentially harmful systems are not deployed in the EU while those posing little or no risk can be used freely. Systems that pose a high risk will be constrained accordingly, without preventing opportunities for innovation and development.
It is imperative that organisations that may fall under the EU AI Act are aware of their obligations, as non-compliance comes with heavy penalties of up to €30 million or 6% of global turnover, whichever is higher. The severity of the fines depends on the level of the transgression, ranging from using prohibited systems at the high end to supplying incorrect, incomplete, or misleading information at the low end. For further information about the penalties associated with the Act, you can read our Penalties of the EU AI Act blog post.
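To make the scale of these fines concrete, here is a minimal sketch of how the top-tier cap could be calculated. The function name and structure are illustrative, not part of the Act, and the figures reflect only the highest tier described above:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Illustrative upper bound for the top tier of fines under the
    proposed EU AI Act: the higher of EUR 30 million or 6% of global
    annual turnover. (Hypothetical helper for illustration only;
    lower tiers of the Act carry lower caps.)"""
    return max(30_000_000.0, 0.06 * global_turnover_eur)

# For example, a firm with EUR 1 billion in global turnover faces a
# cap of EUR 60 million, since 6% of turnover exceeds EUR 30 million.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```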
While the rules will be enforced only in the EU, they are likely to be adopted globally and have been termed the ‘GDPR for AI’. Regulation like this will soon mean that AI worldwide is deployed more safely and with greater accountability.
Get in touch with us at we@holisticai.com to find out more about how we can help you prepare for this and other upcoming regulations.
Written by Airlie Hilliard, Senior Researcher at Holistic AI. Follow her on LinkedIn.