Policy Hour
Panel & Networking
Online Webinar

Managing Risks with AI Governance

Thursday, 25 April 2024 | 4 pm BST / 5 pm CEST / 11 am EDT

In our April Policy Hour, Siddhant Chatterjee, Public Policy Strategist at Holistic AI, and Rashad Abelson, Technology Sector Lead at the OECD Centre for Responsible Business Conduct, discussed AI risks and mitigation strategies. They covered global AI incidents, their implications, and how emerging legislation such as the EU AI Act addresses them. Connecting these ideas to the OECD AI Principles, they outlined risk management techniques, standards, and audits for organizations, and closed with recommendations for effective, scalable risk management.

Below we have included the full recording of the event, as well as the slides and Q&A:

Q&A


Q: How does the OECD define AI risks, incidents, and harms?
A: The OECD views risk as the likelihood and significance of harm caused by AI systems. Incidents are events in which AI systems directly or indirectly cause harm, and harms encompass violations of human rights, environmental damage, and other negative consequences. These definitions align with international frameworks and initiatives such as the EU AI Act and the NIST AI RMF.

Q: What kinds of AI incidents were discussed?
A: Examples include the impact of AI algorithms on social media, AI-powered surveillance tools, invasive workplace monitoring, and AI in public decision-making. Such incidents erode public trust in governments, businesses, and the media.

Q: How does the OECD handle diverging regulatory approaches across jurisdictions?
A: The OECD encourages alignment across jurisdictions to facilitate interoperability and the convergence of standards. While approaches may vary, the goal is to support companies in complying with multiple standards.

Q: What resources does the OECD offer to help companies manage AI risks?
A: The OECD provides tools and guidance to help companies comply with regulations and mitigate risks, including the OECD AI Incident Monitor, a repository of AI incidents and related resources.

Q: What do international standards expect of companies?
A: International standards emphasize companies' responsibility for adverse impacts and prioritize risk mitigation and transparency reporting to ensure accountability.

Q: What challenges do organizations face in implementing responsible AI?
A: Challenges include the rapid pace of product development, limited understanding of responsible AI practices at the international level, and uncertainty over what constitutes "responsible AI."

Q: What should organizations do when developing responsible AI policies?
A: Organizations should adhere to internationally accepted principles, engage stakeholders, and prioritize transparency.

Discover how we can help your company

Schedule a call with one of our experts
