
Policy Hour: Unpacking the Proposed EU AI Act

Thursday, 27 April 2023, 9:00 am ET

In the second edition of our Policy Hour webinar series, we unpacked the proposed EU AI Act. Our host, Holistic AI’s Co-Founder Dr Emre Kazim, was delighted to be joined by Minesh Tanna, Partner and Global AI Lead at Simmons & Simmons LLP.

Our experts delved into critical amendments and key topics such as the definitions of AI and "significant risk," as well as the implications of inferring sensitive data on gender and health. The panellists provided valuable context for these amendments and discussed their potential consequences.

Moreover, the panel discussed what the Act will mean in practice and whether the supporting standards will be ready in time to adequately address the evolving landscape.

During the discussion, we received many audience questions that we were unfortunately unable to get through during the event, so we have put together a Q&A.

Below we have included the full recording of the event, as well as the slides and Q&A.

Unpacking the Proposed EU AI Act - PPT

Q&A


No. If you're using an AI system, you must still follow the relevant sector-specific or use-case-specific legislation in addition to these new AI-targeted laws.

Yes. The Act affects anyone who places on the market or puts into service an AI system within the EU, whether they're physically present in the EU or based in a third country, such as the UK.

Such risk management frameworks are currently voluntary, but the EU AI Act requires high-risk systems to have a risk management framework in place. Voluntary frameworks can therefore help support implementation while formal standards are being developed.

There is a lack of standardisation in how AI is defined, but most definitions agree that an AI system operates with some level of autonomy, involves some degree of human input, and can produce a range of outputs. Check out our “Lost in Translation” blog, where we compare the different definitions of AI – including the current definition under the AIA.

EU regulators aim to regulate various aspects of AI systems, from development and design to deployment. Their approach is not limited to data, but also targets explainability, bias, and broader risks such as governance and accountability.

It is true that the EU AI Act has faced criticism for the way it defines AI, with some arguing that earlier definitions were too broad. While the definition remains under debate and may be updated to align more closely with the OECD's, defining AI and determining its scope is crucial for regulation.

The UK is likely to adopt a vertical, sector-specific approach to regulating automated decision making, while the EU favours a horizontal approach based on minimising risk rather than specific use cases or sectors.

UK-based firms with EU subsidiaries will likely adopt a cautious and conservative approach when developing models for both jurisdictions. By complying with the more stringent EU AI Act, they can ensure they meet the UK's lighter regulatory requirements. This prepares them for the worst-case scenario and encourages early compliance efforts.

To check compliance, AI systems must undergo conformity assessments, which can be internal or involve partnering with organizations like Holistic AI to ensure requirements are met. It is essential to consult your legal team. AI systems must pass these assessments and obtain the CE marking before entering the market.
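
To make that preparation concrete, here is a minimal sketch of an internal pre-assessment tracker, assuming Python. The requirement names summarise the obligations the Act places on high-risk systems (Articles 9 to 15 of the proposal); they are illustrative, not an official checklist.

```python
# Minimal sketch of an internal pre-assessment tracker for a high-risk
# AI system. Requirement names summarise Articles 9-15 of the proposed
# EU AI Act; they are illustrative, not an official checklist.
HIGH_RISK_REQUIREMENTS = [
    "risk_management_system",             # Art. 9
    "data_and_data_governance",           # Art. 10
    "technical_documentation",            # Art. 11
    "record_keeping",                     # Art. 12
    "transparency_and_user_information",  # Art. 13
    "human_oversight",                    # Art. 14
    "accuracy_robustness_cybersecurity",  # Art. 15
]

def outstanding_items(evidence: dict) -> list:
    """Return the requirements not yet backed by documented evidence."""
    return [r for r in HIGH_RISK_REQUIREMENTS if not evidence.get(r)]

# Example: only human oversight has been documented so far.
print(outstanding_items({"human_oversight": True}))
```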

Where an AI system has varying risk levels across use cases, AI Act compliance can involve multiple stakeholders: providers, users, importers, distributors, manufacturers, and authorized representatives. Entities such as FCA-regulated firms must maintain relevant policies and procedures. If in doubt about liability or compliance, check with your counsel.

It is probable that firms regulated by the FCA, such as banks and insurers/brokers, will eventually need to introduce AI-specific policies and procedures. This development will assist with transparency obligations and align with the evolving nature of the field, even though it is not yet a widespread requirement.

The EU's approach tends to be cautious, and there has been debate about the AI definition being too broad. However, only a limited number of systems are prohibited or considered high-risk. While the Act may restrict some experimentation, its overall impact on the AI sector is likely to be positive rather than inhibiting innovation.

The regulation of ChatGPT could vary depending on its classification. As a general-purpose AI system, it would have different obligations. However, if it is updated and tailored for specific uses, it may no longer be considered general-purpose. There have been discussions about classifying ChatGPT and other generative AI as high-risk systems, which could change the regulatory requirements. This remains unknown currently.

Businesses should begin by taking an inventory of their AI systems, identifying and mitigating risks. Early preparation is key for successful compliance. Holistic AI can help in this process – schedule a demo with us!
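
As a starting point for that inventory, the sketch below shows one possible shape for a register entry; the field names are hypothetical, and recruitment is used as the example because employment use cases are listed as high-risk under the proposed Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """The Act's risk tiers, simplified for illustration."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    owner: str           # accountable team or person
    purpose: str         # intended use, in plain language
    jurisdictions: list  # markets where the system is deployed
    risk_tier: RiskTier  # provisional classification
    mitigations: list = field(default_factory=list)  # risks identified and controls applied

inventory = [
    AISystemRecord(
        name="cv-screening-v2",
        owner="HR analytics",
        purpose="Rank incoming job applications",
        jurisdictions=["EU", "UK"],
        # Employment and worker-management use cases are listed as
        # high-risk under the proposed Act.
        risk_tier=RiskTier.HIGH,
        mitigations=["regular bias audit", "human review of rejections"],
    ),
]
```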

The EU member states are responsible for designating authorities to enforce these regulations within their jurisdictions. The European Parliament is currently discussing and debating the details of the AI regulatory framework.

Under the Act, providers of AI systems established in the EU must comply with the regulation, along with those in third countries that place AI systems on the market in the EU, and those located in the EU that use AI systems. It also applies to providers and users based in third countries if the output of the system is used within the EU. Exempt from the regulation are those who use AI systems for military purposes, as well as public authorities in third countries.

The EU AI Act primarily focuses on safety, while copyright, privacy, and data protection are already addressed by other established EU legislation. There are indications that generative AI, like ChatGPT, could be brought within the AI Act. The US AI Bill of Rights is less comprehensive by comparison. The differences between these regimes reflect each jurisdiction's priorities and existing legal frameworks.

The UK's lighter, sector-specific approach contrasts with the EU's AI Act, as UK regulators will have distinct rules for different use cases. When developing and deploying AI systems, it is essential to be aware of the laws in the regions targeted. If a company chooses not to comply with the EU AI Act, it may be excluded from the EU market. These differences may affect UK-EU collaborations in AI development and deployment, as well as the transparency of AI models in different countries.

The EU AI Act emphasizes transparency regarding an AI system's capabilities, limitations, and intended use. While achieving 100% transparency may not be realistic, especially for deep neural networks, providers are expected to inform users as much as possible about the system's functionality and purpose. This is the essence of the transparency obligations outlined in the Act.

The EU AIA is set to be the gold standard in AI regulation. Given that many UK businesses are not solely focused on the UK market, it is likely they will be impacted by the AIA. Policymakers in the UK have been actively thinking about this ahead of the EU finalising the legislative process.

Essentially, yes. Minimal-risk AI systems have no obligations, while limited-risk systems are subject to transparency requirements under the EU AI Act.
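
To make the tiered structure concrete, here is a minimal sketch mapping each tier to the broad shape of its obligations; the summaries are simplified, not legal text.

```python
# Simplified mapping from risk tier to the broad shape of the obligations
# under the proposed EU AI Act; summaries only, not legal text.
OBLIGATIONS_BY_TIER = {
    "prohibited": "Cannot be placed on the EU market or put into service.",
    "high": "Risk management, conformity assessment, documentation, CE marking.",
    "limited": "Transparency duties, e.g. disclosing that users are interacting with AI.",
    "minimal": "No obligations; voluntary codes of conduct are encouraged.",
}

def obligations_for(tier: str) -> str:
    return OBLIGATIONS_BY_TIER.get(tier, "Unclassified: determine the system's tier first.")

print(obligations_for("limited"))
```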

Our Speakers

Emre Kazim, Co-founder and Co-CEO, Holistic AI

Minesh Tanna, Partner and Global AI Lead, Simmons & Simmons LLP


