Policy Hour
panel & networking
Online Webinar

Implementing the EU AI Act: Text to Practice

Wednesday, 20 March 2024 - 9am ET, 1pm GMT
Register for this event

Our March Policy Hour brought together Holistic AI’s Legal and Regulatory Lead, Osman Gucluturk, and Kai Zenner, the Head of Office and Digital Policy Adviser for MEP Axel Voss (European People’s Party Group) in the European Parliament to discuss the EU AI Act.  

The panel discussed the EU AI Act's progression, the European Parliament's stance on the most recent draft, the measures enterprises should take during the grace period to prepare for the Act, and strategies for sustained compliance once it applies. The discussion closed with an outlook on the Act's place over the next five to ten years.

Below we have included the full recording of the event, as well as the slides and Q&A.

Q&A


How does the EU AI Act regulate AI, and what penalties can companies face?

The legislation takes a risk-based approach to regulating AI, with systems subject to varying levels of obligations according to their risk classification. High-risk applications face the most stringent obligations, while systems deemed to pose an unacceptable level of risk are banned altogether.

Companies can be fined up to €35 million or 7% of global annual turnover, whichever is higher, depending on the infringement and the size of the company. The lowest penalties for AI operators, for providing incorrect, incomplete, or misleading information, are up to €7.5 million or 1% of total worldwide annual turnover. Find out more about penalties here.
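To make the fine mechanics concrete, here is a minimal sketch of how the caps combine: for a given penalty tier, the upper bound is the higher of a fixed amount and a share of worldwide annual turnover. The function is ours, for illustration only; the applicable tier depends on the infringement, and special rules (e.g. for SMEs) can change which bound applies.

```python
def max_fine(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Upper bound on a fine: the higher of a fixed cap and a share of turnover.

    Illustrative only, not legal advice.
    """
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Prohibited-practice tier: up to €35m or 7% of worldwide annual turnover.
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70,000,000.0
# Lowest tier (incorrect, incomplete, or misleading information): €7.5m or 1%.
print(max_fine(7_500_000, 0.01, 1_000_000_000))   # 10,000,000.0
```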

Who needs to comply with the EU AI Act?

The following entities are covered by the EU AI Act and must prepare for the legislation (a simplified scope-check sketch follows below):

  1. Providers of AI systems established in the EU.
  2. Providers located in third countries that place AI systems on the market in the EU.
  3. Deployers of AI systems established in the EU.
  4. Deployers based in third countries if the output of their AI systems is used within the EU.

There are some exemptions, such as pre-market AI research and testing, international public authorities, military AI systems, and most free/open-source AI components.
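To make the scope rules above concrete, here is a minimal first-pass scope check. It is our simplification for illustration only, not the Act's legal test; the function and parameter names are hypothetical.

```python
def in_scope(role: str, established_in_eu: bool,
             placed_on_eu_market: bool = False,
             output_used_in_eu: bool = False) -> bool:
    """Simplified scope check based on the four categories listed above.

    'role' is 'provider' or 'deployer'. Real scoping also covers importers and
    distributors, plus the exemptions noted above (research, military, etc.).
    """
    if role == "provider":
        return established_in_eu or placed_on_eu_market
    if role == "deployer":
        return established_in_eu or output_used_in_eu
    return False

# A third-country provider selling an AI system into the EU market is covered:
assert in_scope("provider", established_in_eu=False, placed_on_eu_market=True)
```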

How should organisations prepare for the EU AI Act?

A practical starting point is to create a comprehensive inventory of your organisation's AI systems, whether developed or deployed within the EU or elsewhere. Organisations should also establish explicit, transparent governance procedures and guidelines for AI applications that adhere to the Act's provisions.

Developing an internal culture that understands the Act and actively adheres to its provisions is also crucial. This requires ongoing education and awareness programmes to familiarise employees with the Act's requirements and implications. Investing in AI compliance expertise and talent will likewise be vital for maintaining regulatory adherence.

Moreover, it is essential for organisations to invest in the necessary technologies and infrastructure to meet the Act's provisions. This includes identifying and implementing appropriate tools and systems that facilitate compliance monitoring, data protection, and other essential requirements outlined by the legislation.
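As a concrete starting point for the inventory mentioned above, a minimal record per system might capture the fields below. This is a hypothetical sketch; the field names are ours, not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI inventory (illustrative fields)."""
    name: str
    owner: str                       # accountable team or individual
    purpose: str                     # intended use, in plain language
    role: str                        # 'provider' or 'deployer'
    deployed_in_eu: bool
    risk_tier: str = "unclassified"  # e.g. 'minimal', 'limited', 'high', 'prohibited'
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(name="cv-screening", owner="HR", purpose="CV triage",
                   role="deployer", deployed_in_eu=True, risk_tier="high"),
]
```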

When will the EU AI Act come into force?

The Act was approved by the European Parliament on 13 March 2024 in a significant plenary session. The next and final step is formal approval by the Council of the European Union, as per the EU's ordinary legislative procedure. The Council's approval of the text is anticipated in May, after further scrutiny and revision by legal specialists to ensure linguistic precision and legal coherence.

The Act's entry into force is distinct from its application. The Act's provisions will apply in phases, affecting different systems and sectors at different stages, with most provisions applying within two years of entry into force.
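For orientation, here is a rough sketch of the phased schedule as anticipated at the time of the event. The offsets are counted from entry into force and are illustrative; exact dates depend on publication in the Official Journal.

```python
from datetime import date, timedelta

# Months after entry into force at which each group of provisions was
# expected to start applying (anticipated schedule; not legal advice).
PHASE_IN_MONTHS = {
    "prohibited practices": 6,
    "general-purpose AI obligations": 12,
    "most remaining provisions": 24,
}

def milestone_dates(entry_into_force: date) -> dict[str, date]:
    """Approximate each milestone date, treating a month as 30 days."""
    return {label: entry_into_force + timedelta(days=30 * months)
            for label, months in PHASE_IN_MONTHS.items()}
```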

Which AI systems are considered high-risk?

Under the EU AI Act, systems used in the following areas are considered high-risk (a simple triage sketch follows below):

  1. Biometrics
  2. Critical infrastructure
  3. Educational and vocational training
  4. Employment, workers management, and access to self-employment
  5. Access to and enjoyment of essential private services and essential public services and benefits
  6. Law enforcement
  7. Profiling
  8. Migration, asylum, and border control management
  9. Administration of justice and democratic processes

Check out our guide to identifying whether your systems are high-risk here.
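To illustrate this first-pass triage, here is a minimal sketch. It is our simplification for illustration only; the Act's actual test involves conditions and exemptions beyond a simple area match, and the area labels below are shorthand for the categories listed above.

```python
# Shorthand labels for the high-risk areas listed above (illustrative).
HIGH_RISK_AREAS = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "profiling", "migration and border control",
    "justice and democratic processes",
}

def potentially_high_risk(use_case_area: str) -> bool:
    """First-pass flag: a match means 'assess further', not a legal finding."""
    return use_case_area.strip().lower() in HIGH_RISK_AREAS

print(potentially_high_risk("Employment"))  # True -> detailed assessment needed
```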

Which AI systems are prohibited?

Under the EU AI Act, systems posing an unacceptable risk are prohibited. Specifically, these are:

  • Systems using ‘real-time’ remote biometric identification in publicly accessible spaces for the purposes of law enforcement, subject to narrow exceptions
  • Systems using subliminal, manipulative, or deceptive techniques that pose a significant risk of harm
  • Systems inferring emotions in the workplace and educational institutions
  • Social scoring systems leading to detrimental or unfavourable treatment
  • Systems exploiting vulnerabilities and distorting behaviour in a way reasonably likely to cause significant harm
  • Systems predicting the risk of criminal offences based solely on profiling or personality traits
  • Systems based on untargeted scraping of facial images to create facial recognition databases
  • Biometric categorisation systems inferring race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation

What obligations do providers of general-purpose AI models have?

Providers of general-purpose AI models have transparency obligations, including disclosing certain information to downstream system providers. Certain general-purpose models could also pose systemic risks due to their capabilities and wide use, and providers of models with systemic risk have further transparency, risk assessment, and mitigation obligations.

What impact will the EU AI Act have?

Just as the GDPR did for privacy, the AI Act will shine a spotlight on AI risk, significantly increasing awareness of the importance of responsible AI among businesses, regulators, and the wider public.

Forward-thinking enterprises should therefore act now to establish AI risk management frameworks and minimise the legal, reputational, and commercial damage that could result from falling foul of the AI Act.

Schedule a call to learn how Holistic AI can help your organisation prepare for EU AI Act compliance.


Discover how we can help your company

Schedule a call with one of our experts
