Policy Hour
panel & networking
Online Webinar

Global AI Governance

Wednesday, 19 June 2024 | 2pm CEST / 1pm BST / 8am ET

For our June Policy Hour, Luis Aranda, Artificial Intelligence Policy Analyst and Economist at the OECD, joined Holistic AI’s Legal and Regulatory Lead Dr Osman Gazi Güçlütürk to discuss Global AI Governance.

The discussion covered key elements of AI Governance and related trends, including the role of principles, voluntary frameworks, legislation, and standards. Divergence in how an AI system is defined was also discussed; the OECD has played a key role in this debate, recently updating its definition to bring it more in line with the definition in the EU AI Act and creating some much-needed cohesion.

Finally, the discussion moved on to AI incidents: examples of where AI has gone wrong and resulted in actual or potential harm. These incidents are tracked by the OECD through its AI Incidents Monitor and by Holistic AI through our global AI Tracker, which also tracks AI legislation, regulation, standards, legal action, and penalties and fines.

Missed the webinar or want to re-watch? Check out the recording at the top of the page and download the slides below.

Q&A

What are the OECD AI Principles?

The OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. The principles were initially adopted in 2019 and most recently updated in May 2024.

These principles outline a series of values that AI actors should uphold when developing and deploying AI, including prioritising transparency and explainability, accountability, and fairness. The principles also provide recommendations to policymakers on how to create effective AI policies.

OECD AI Principles

Although voluntary, these principles are adhered to by 47 countries around the world, and counting.

How is an AI system defined?

An AI system is defined in different ways by different entities, and even differently across different laws. However, these definitions tend to cover four key elements: varied outputs, the role of humans, the extent of automation, and the technology underlying the system.

The OECD’s principles previously defined an AI system as:

“A machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g., with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.”

However, this was updated in November 2023 to:

“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

This change was made after lengthy discussions to inform the EU AI Act and create more cohesion; the EU AI Act, in turn, defines an AI system as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
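To make the overlap concrete, here is a minimal, purely illustrative sketch in Python (the class and field names below are our own shorthand, not terms drawn from the OECD principles or the EU AI Act) that condenses the two current definitions into the four shared elements noted above:

```python
from dataclasses import dataclass


@dataclass
class AISystemDefinition:
    """Summarises one body's definition along four commonly shared elements."""
    source: str         # which body's definition this condenses
    outputs: list[str]  # e.g. predictions, content, recommendations, decisions
    human_role: str     # how humans set the system's objectives
    automation: str     # extent of automation / levels of autonomy
    technology: str     # the technology underlying the system


# OECD definition as updated in November 2023, condensed into the four elements
oecd_2023 = AISystemDefinition(
    source="OECD (November 2023)",
    outputs=["predictions", "content", "recommendations", "decisions"],
    human_role="operates for explicit or implicit objectives",
    automation="varying levels of autonomy and adaptiveness after deployment",
    technology="machine-based system that infers outputs from its inputs",
)

# EU AI Act definition, condensed the same way to make the overlap visible
eu_ai_act = AISystemDefinition(
    source="EU AI Act",
    outputs=["predictions", "content", "recommendations", "decisions"],
    human_role="operates for explicit or implicit objectives",
    automation="designed for varying levels of autonomy; may adapt after deployment",
    technology="machine-based system that infers outputs from its inputs",
)
```

Condensed this way, the two definitions differ mainly in where the autonomy and adaptiveness wording sits, not in substance.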

What are AI incidents and hazards?

According to the OECD’s AI Incidents Monitor, an AI incident occurs when an AI system directly or indirectly causes harm such as injury, disruption to critical infrastructure, violations of human rights, or harm to property.

On the other hand, a hazard is an event where an AI system could lead to one of these harms, but that harm has not yet been realised.
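As a minimal sketch of this distinction (an illustration of the definitions above, not an official OECD tool), the incident/hazard split turns on a single question: has the harm actually materialised?

```python
from dataclasses import dataclass
from enum import Enum


class Harm(Enum):
    """Harm categories listed in the OECD AI Incidents Monitor definition."""
    INJURY = "injury"
    CRITICAL_INFRASTRUCTURE = "disruption to critical infrastructure"
    HUMAN_RIGHTS = "violation of human rights"
    PROPERTY = "harm to property"


@dataclass
class AIEvent:
    """A reported event involving an AI system (illustrative model only)."""
    description: str
    harms: list[Harm]    # the harms the system caused or could cause
    harm_realised: bool  # True if the harm has actually occurred


def classify(event: AIEvent) -> str:
    """Return 'incident' if harm occurred, 'hazard' if it remains potential."""
    if not event.harms:
        return "out of scope"  # no identified harm category
    return "incident" if event.harm_realised else "hazard"


# A near-miss is logged as a hazard, not an incident
near_miss = AIEvent(
    description="Faulty output was caught before it disrupted grid operations",
    harms=[Harm.CRITICAL_INFRASTRUCTURE],
    harm_realised=False,
)
assert classify(near_miss) == "hazard"
```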

What are AI standards?

An AI standard outlines a set of requirements and specifications that help ensure the quality and reliability of a product. AI standards cover aspects such as AI management systems, risk management, data quality, and documentation. They are developed both by international standards organizations such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), and by national and regional bodies including the British Standards Institution (BSI), Germany’s VDE, the American National Standards Institute (ANSI), the National Institute of Standards and Technology (NIST), and Europe’s CEN/CENELEC.

Compliance with AI standards helps to ensure that effective governance practices are in place to prevent incidents and hazards, as well as to increase trust. Under the EU AI Act in particular, compliance with the harmonised standards to be issued by CEN/CENELEC will create a presumption of conformity with some of the requirements of the Act.

What does the future hold for AI Governance?

AI Governance is expected to mature significantly over the next 5-10 years, supported by the implementation of major AI laws such as the EU AI Act. The Act is expected to become the global gold standard for AI regulation due to its extraterritorial scope and comprehensiveness, and may help to drive greater convergence across the regulatory landscape, including wider adoption of a risk-based approach.

However, visions of a single global regulatory framework may be unrealistic despite the expected convergence. Indeed, even the EU AI Act will be enforced by member state authorities rather than by a single EU institution, leaving room for some limited divergence. On the other hand, this also allows the wider legal context of each country to be taken into account, making enforcement more effective.

Nevertheless, it will become even more important that guidelines and laws evolve with forward-looking mechanisms so that they do not become outdated as technology advances.

Our Speakers

Luis Aranda
Artificial Intelligence Policy Analyst and Economist, OECD

Dr Osman Gazi Güçlütürk
Legal and Regulatory Lead, Holistic AI

