Regulating AI in the EU: What Businesses Need to Know About the AI Act

June 21, 2023
Authored by
Ayesha Gulley
Senior Policy Associate at Holistic AI
Airlie Hilliard
Senior Researcher at Holistic AI

On 14 June 2023, the European Parliament voted to move forward with the EU AI Act, which had previously been voted through by the Parliament's committees on 11 May 2023. First proposed on 21 April 2021, the Act, formally known as the Harmonised Rules on Artificial Intelligence, seeks to lead the world in AI regulation and create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights in the development and deployment of AI. The first sweeping legislation of its kind, the Act will have implications for countless AI systems used in the EU – and the countdown for compliance has begun.

In this blog post, we give a high-level overview of who the AI Act will affect, how it will regulate AI in the EU, the obligations it places on high-risk systems, and how you can start to prepare.

Who does the EU AI Act affect?

The EU AI Act is set to have implications for providers of AI systems used in the EU, whether they are located in the EU or a third country. The legislation also applies to deployers of AI that are established or located in the EU and distributors that make AI systems available on the EU market. There are also implications for entities that import AI systems from outside the EU, as well as product manufacturers and authorised representatives of providers and operators of AI systems. Therefore, the Act will have a global reach, affecting many parties around the world involved in the design, development, deployment, and use of AI systems within the EU.

In the interests of balancing innovation and safety, AI systems used in research, testing, and development will be exempt from the legislation, provided that they are not tested in real-world conditions and that they respect fundamental rights and other legal obligations.

Also excluded are AI systems used exclusively for military purposes, public authorities of third countries and international organisations working within international agreements, and AI components provided under free and open-source licences, unless they are foundation models.


How is AI defined by the EU AI Act?

Under the EU AI Act, artificial intelligence is defined as:

“A machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”

This definition was revised in the May text to align it more closely with the OECD's definition of AI, creating greater standardisation around what counts as AI and, therefore, which systems fall within the scope of AI regulation.

How will the EU regulate AI?

EU citizens are at the heart of the regulation, which seeks to introduce safeguards to minimise preventable harm. However, the Act also strives to ensure that its obligations do not stifle innovation. Accordingly, the EU AI Act takes a risk-based approach to regulating AI, where obligations are proportional to the risk posed by a system based on four risk categories (a minimal illustrative sketch of this tiering follows the list):

  • Minimal risk – Comprising the majority of the systems available on the EU market, minimal-risk systems include spam filters and AI-enabled video games. They have no associated obligations.
  • Limited risk – Systems posing some level of risk, such as deepfakes. These carry transparency obligations: users must be informed that they are interacting with an AI system or with generated or manipulated content.
  • High risk – Systems that have the potential to pose a significant risk of harm to the health, safety, or fundamental rights of individuals. High-risk systems have the most stringent obligations.
  • Unacceptable risk – Systems that pose an unacceptable level of risk, including real-time biometric identification and systems that use subliminal techniques. These are prohibited from being made available or used in the EU.
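To make this tiering concrete, here is a minimal sketch in Python that models the four categories and the example systems mentioned above. The enum, mapping, and function names are our own hypothetical illustrations, not terminology from the Act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories of the EU AI Act, ordered by severity."""
    MINIMAL = 1       # e.g. spam filters, AI-enabled video games: no obligations
    LIMITED = 2       # e.g. deepfakes: transparency obligations
    HIGH = 3          # significant risk to health, safety, or fundamental rights
    UNACCEPTABLE = 4  # e.g. subliminal techniques: prohibited in the EU


# Illustrative mapping of the example systems mentioned above to their tiers.
EXAMPLE_SYSTEMS = {
    "spam filter": RiskTier.MINIMAL,
    "ai-enabled video game": RiskTier.MINIMAL,
    "deepfake generator": RiskTier.LIMITED,
    "cv-screening tool": RiskTier.HIGH,  # Annex III: employment (see below)
    "real-time biometric identification": RiskTier.UNACCEPTABLE,
}


def obligations_for(tier: RiskTier) -> str:
    """Return a one-line summary of the obligations attached to each tier."""
    return {
        RiskTier.MINIMAL: "No obligations under the Act.",
        RiskTier.LIMITED: "Transparency: users must know they face an AI system.",
        RiskTier.HIGH: "Full compliance regime (Articles 9-15) plus conformity assessment.",
        RiskTier.UNACCEPTABLE: "Prohibited from the EU market.",
    }[tier]


for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {obligations_for(tier)}")
```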

Is your AI system high-risk?

Under Article 6, a system is considered high-risk if it is intended to be used as a safety component of a product (or is itself a product) covered by the EU harmonisation legislation listed in Annex II, and that product is required to undergo a third-party conformity assessment related to health and safety risks.

This includes products covered by laws relating to the safety of toys, lifts, pressure equipment, and in vitro diagnostic medical devices, per Annex II.

Annex III also lists eight use cases that are considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. To aid this evaluation, six months before the implementation of the law, the European Commission will provide guidance, following a consultation with the AI Office and relevant stakeholders, on the circumstances in which outputs from these systems would pose such a significant risk. The eight use cases are as follows (a sketch of the overall high-risk determination logic follows the list):

Biometric and biometric-based systems

Systems used for the biometric identification of individuals and systems used to infer the personal characteristics of individuals based on their biometric or biometric-based data. This includes emotion recognition systems but does not apply to identification systems used to confirm the identity of a specific person.

Systems for critical infrastructure

This category contains an additional criterion for a system to be considered high-risk: whether it poses a significant risk of harm to the environment. It includes systems used for the management and operation of road, rail, and air traffic (unless regulated by harmonised or sector-specific legislation), as well as systems intended to be used as safety components in the management and operation of the supply of water, gas, heating, and electricity, or of critical digital infrastructure.

Education and vocational training systems

Systems used to determine access or to influence decisions on admission or assignment to educational and vocational training institutions. In addition, systems used to assess students in educational and training institutions, or in tests required for admission to them, would be in the scope of the legislation, as would systems used to determine or influence the appropriate level of education for an individual. Finally, systems used to monitor and detect prohibited student behaviour would be considered high-risk.

Systems influencing employment, worker management and access to self-employment

Systems intended to be used for recruitment or selection are considered high-risk. This includes systems used to place targeted job ads, screen or filter applications, and evaluate candidates in tests or interviews. Beyond hiring, systems used to make decisions about promotion, termination, and task allocation based on behaviour or personal characteristics, as well as systems used to monitor and evaluate performance and behaviour, would also be considered high-risk.

Systems affecting access and use of private and public services and benefits

AI systems used by or on behalf of public authorities to evaluate eligibility for benefits and services, including healthcare, housing, electricity, heating/cooling, and internet. This also includes systems used to grant, revoke, increase, or reclaim these benefits and services.

This category also covers systems used for credit scoring (excluding systems used to detect financial fraud), as well as systems used to make or influence decisions about eligibility for health and life insurance. Further, systems used to evaluate and classify emergency calls, or to dispatch or determine the priority of dispatch of first responders (including police, firefighters, medical aid, and emergency healthcare), are also in scope.

Systems used in law enforcement

AI systems used by or on behalf of law enforcement, or by EU agencies or bodies: as polygraphs or similar tools; to evaluate the reliability of evidence in the investigation or prosecution of criminal offences; for the profiling of individuals in the course of the detection, investigation, or prosecution of criminal offences; or for crime analytics, searching complex, large data sets to identify unknown patterns or discover hidden relationships in the data.

Systems used in migration, asylum and border control management

Systems used on behalf of public authorities or by EU agencies – such as polygraphs or similar tools – to assess security, health or irregular immigration risks of an individual entering a Member State, to verify the authenticity of travel documents, and those used to assess applications for asylum, visa, and residence permits and associated complaints.

This also includes systems used to monitor, surveil, or process data for border management activities to detect, recognise, or identify individuals, and systems used to forecast or predict trends related to migration movement and border crossing.

Systems used in the administration of justice and democratic processes

AI systems used by or on behalf of a judicial authority to assist in researching and interpreting facts and the law, and in applying the law to a set of facts. This also includes systems intended to be used to influence the voting behaviour of individuals or the outcome of an election or referendum, excluding systems whose outputs individuals are not directly exposed to, such as systems used to organise, optimise, and structure political campaigns.

A recent addition here is AI systems intended to be used by social media platforms designated as Very Large Online Platforms under the Digital Services Act (currently platforms with more than 45 million users).
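As described above, Article 6 provides two routes into the high-risk category: the Annex II product-safety route and the Annex III use-case route. The sketch below expresses that decision logic in Python under simplifying assumptions; the field and function names are hypothetical, and a real determination rests on legal analysis rather than boolean flags.

```python
from dataclasses import dataclass
from typing import Optional

# The eight Annex III use-case areas summarised above.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice_democracy",
}


@dataclass
class AISystem:
    """Hypothetical record of the facts relevant to an Article 6 determination."""
    safety_component_of_annex_ii_product: bool  # Annex II route, first condition
    product_needs_third_party_assessment: bool  # Annex II route, second condition
    annex_iii_area: Optional[str]               # one of ANNEX_III_AREAS, or None
    poses_significant_risk: bool                # to health, safety, or fundamental rights


def is_high_risk(system: AISystem) -> bool:
    """Sketch of the two Article 6 routes into the high-risk category."""
    annex_ii_route = (
        system.safety_component_of_annex_ii_product
        and system.product_needs_third_party_assessment
    )
    annex_iii_route = (
        system.annex_iii_area in ANNEX_III_AREAS
        and system.poses_significant_risk
    )
    return annex_ii_route or annex_iii_route


# Example: a CV-screening tool falls under the employment area of Annex III.
cv_screener = AISystem(False, False, "employment", poses_significant_risk=True)
assert is_high_risk(cv_screener)
```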

What are the obligations for high-risk AI systems?


Obligations for high-risk systems vary by the type of entity associated with the system, but there are seven broad obligations to comply with (a sketch of how these might be tracked internally follows the list):

  • A continuous and iterative risk management system must be established throughout the entire lifecycle of the system (Article 9)
  • Data governance practices should be established to ensure the data for the training, validation, and testing of systems are appropriate for the system’s intended purpose (Article 10)
  • Technical documentation should be drawn up before the system is put onto the market (Article 11)
  • Record-keeping should be facilitated by ensuring the system is capable of automatic recording of events (Article 12)
  • Systems should be developed in a way that allows appropriate transparency and the provision of information to users (Article 13)
  • Systems should be designed to allow appropriate human oversight to prevent or minimise risks to health, safety, or fundamental rights (Article 14)
  • There should be an appropriate level of accuracy, robustness and cybersecurity maintained throughout the lifecycle of the system (Article 15)
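One way to operationalise these obligations internally is as a per-system checklist keyed by Article. The sketch below is a minimal, hypothetical illustration of that idea; the structure and status flags are our own, not something prescribed by the Act.

```python
# Hypothetical checklist keyed by the Articles listed above.
HIGH_RISK_OBLIGATIONS = {
    "Article 9": "Risk management system across the full lifecycle",
    "Article 10": "Data governance for training, validation, and testing data",
    "Article 11": "Technical documentation drawn up before market placement",
    "Article 12": "Automatic recording of events (record-keeping)",
    "Article 13": "Transparency and provision of information to users",
    "Article 14": "Appropriate human oversight",
    "Article 15": "Accuracy, robustness, and cybersecurity",
}


def compliance_gaps(status: dict) -> list:
    """Return the obligations not yet marked as satisfied for a system."""
    return [
        f"{article}: {description}"
        for article, description in HIGH_RISK_OBLIGATIONS.items()
        if not status.get(article, False)
    ]


# Example: a system with documentation and logging in place, but nothing else.
print(compliance_gaps({"Article 11": True, "Article 12": True}))
```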

Providers of foundation models also have an additional requirement: they must provide a description of the data sources used in development. Additionally, biometric systems used to identify individuals must first have their output verified by at least two people with the necessary competence, training, and authority before it can be acted on.

Compliance with these obligations must be confirmed through a conformity assessment, with those passing the assessment required to bear the CE marking before they are placed on the market – a digital marking for digital systems and a physical marking for physical systems. These systems must also be registered in a public database. This procedure must be repeated in the event of any significant modifications to the system, such as if the model is retrained on new data or some features are removed from the model.

What are the penalties for non-compliance?

In addition to the potential reputational damage resulting from non-compliance, the Act will impose steep penalties of up to €30 million or 6% of global turnover, whichever is higher. The severity of the fine will depend on the severity of the offence: using prohibited systems sits at the high end, while supplying incorrect, incomplete, or misleading information sits at the low end, attracting fines of up to €10 million or 2% of turnover. This is similar to the fines set out by the GDPR, which run up to €20 million or 4% of total global turnover for severe violations, and €10 million or 2% of global turnover for less serious offences. It is therefore vital that organisations are aware of their obligations to avoid financial and reputational impacts. A short worked example of how these caps scale with turnover follows.
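Because each cap is expressed as the higher of a fixed amount and a share of global turnover, exposure scales with company size. Here is a small worked example using the figures quoted above and a hypothetical turnover:

```python
def max_fine(global_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Fine cap: the higher of a fixed amount and a percentage of global turnover."""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct)


turnover = 2_000_000_000  # hypothetical €2bn global turnover

# Top tier (e.g. use of a prohibited system): up to €30m or 6% of turnover.
print(max_fine(turnover, 30_000_000, 0.06))  # 120000000.0 -> the 6% share dominates

# Lower tier (incorrect, incomplete, or misleading information): up to €10m or 2%.
print(max_fine(turnover, 10_000_000, 0.02))  # 40000000.0
```

For a company with €100 million in global turnover, by contrast, the fixed caps of €30 million and €10 million would be the binding amounts.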

A global gold standard

The EU AI Act seeks to set the global standard for AI regulation, affecting entities around the world that operate in the EU or interact with the EU market, regardless of where they are located. Its sector-agnostic approach will help to ensure that there are consistent standards across the board and that obligations are proportionate to the risk of the system, so that potentially harmful systems are not deployed in the EU while those associated with little or no risk can be used freely. Those that pose a high risk will be constrained accordingly, without preventing opportunities for innovation and development. EU citizens are put at the heart of the regulation, with an emphasis on protecting their fundamental rights and shielding them from preventable harm.


How can companies prepare for the EU AI Act?

Compliance with the EU AI Act requires a significant commitment from businesses that are developing or deploying systems in the EU. The text of the Act is lengthy and navigating it is no easy feat. With around two and a half years to go until enforcement, it is crucial that businesses use this preparatory period wisely to build up their readiness. Ensuring compliance is a multi-dimensional task that demands the establishment of robust governance structures, building internal competencies, and implementing requisite technologies.

To prepare for compliance, companies must:

  1. Create an inventory of AI systems they develop and/or deploy in the EU and globally, noting the intended purpose of each system and its capabilities (a sketch of a minimal inventory record follows this list).
  2. Lay down clear governance procedures detailing the rules of engagement for AI use and compliance with the AI Act. This involves setting up transparent guidelines that align AI applications with the Act's provisions.
  3. Build competence by fostering an environment of understanding, application, and adherence to the AI Act's rules. This involves acquiring and nurturing the requisite expertise to interpret and comply with the law.
  4. Implement the necessary technology to ensure the infrastructure is in place to efficiently meet the AI Act's demands.
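For the first of these steps, an inventory can start as nothing more than a structured record per system. The sketch below shows one minimal, hypothetical shape for such a record; the fields are illustrative, not mandated by the Act.

```python
from dataclasses import dataclass, field


@dataclass
class InventoryEntry:
    """Hypothetical minimal record for one AI system in a compliance inventory."""
    name: str
    intended_purpose: str                 # risk tiering largely turns on intended purpose
    capabilities: list = field(default_factory=list)
    deployed_in_eu: bool = False
    role: str = "provider"                # provider vs deployer affects obligations
    suspected_risk_tier: str = "unknown"  # to be confirmed by legal review


inventory = [
    InventoryEntry(
        name="candidate-screening-model",
        intended_purpose="Rank job applicants for interview",
        capabilities=["CV parsing", "candidate scoring"],
        deployed_in_eu=True,
        suspected_risk_tier="high",  # Annex III: employment
    ),
]

# Systems deployed in the EU and suspected to be high-risk should be triaged first.
triage = [e.name for e in inventory if e.deployed_in_eu and e.suspected_risk_tier == "high"]
print(triage)
```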

How can Holistic AI help my organisation achieve compliance?

Holistic AI’s proprietary Governance Platform is a world-class solution for AI risk management that can be implemented throughout the entire lifecycle of AI systems, minimising their risk from design right through to deployment. Broadly, there are three steps:

[Figure: the three steps of the Holistic AI Governance Platform]

Holistic AI is dedicated to assisting organisations in achieving compliance with the EU AI Act through its comprehensive suite of solutions, having conducted over 1,000 risk mitigations. Leveraging the power of Holistic AI's Governance Platform can help you with:

AI Governance:

  • Effectively register, assess, and track AI use cases with confidence
  • Utilise a comprehensive framework to evaluate the risk level of your AI systems
  • Identify areas for compliance improvement and prioritise necessary actions

AI Compliance:

  • Stay up to date with AI policies and regulations, including the EU AI Act
  • Track and operationalise AI laws, regulations, and industry standards
  • Ensure that your AI systems align with the existing and emerging compliance requirements

Integration and Workflow Enhancement:

  • Simplify the governance process by seamlessly integrating with various AI systems and tools
  • Incorporate responsible AI practices into your existing workflows effortlessly
  • Streamline compliance efforts by leveraging the platform's integration capabilities

Transparency:

  • Enhance trust and transparency by utilising customisable report templates
  • Effectively communicate the details of your AI systems to regulators, customers, and other stakeholders
  • Demonstrate compliance efforts and responsible AI practices through transparent reporting

Schedule a demo to find out more about how Holistic AI can help you prepare to be compliant.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
