Policy Hour

Managing Risks with AI Governance

Thursday, 25 April 2024 | 4 pm BST / 5 pm CEST / 11 am EDT

In our April Policy Hour, Siddhant Chatterjee, Public Policy Strategist at Holistic AI, and Rashad Abelson, Technology Sector Lead at the OECD Centre for Responsible Business Conduct, discussed AI risks and mitigation strategies. They covered global AI incidents, their implications, and how emerging legislation like the EU AI Act addresses them. Connecting these ideas to OECD AI principles, they outlined risk management techniques, standards, and audits for organizations. Additionally, they provided recommendations for effective and scalable risk management.

Below we have included the full recording of the event, as well as the slides and Q&A:

Q&A


The OECD views risk as the likelihood and significance of harm caused by AI systems. Incidents are events where AI systems directly or indirectly cause harm, and harms encompass violations of human rights, environmental impacts, and other negative consequences. These definitions are aligned with international frameworks and initiatives such as the EU AI Act and the NIST AI RMF (an illustrative sketch of this likelihood-severity framing follows the Q&A).
Examples include the impact of AI algorithms on social media, AI-powered surveillance tools, invasive workplace monitoring, and AI in public decision-making. These incidents erode public trust in governments, businesses, and media.
The OECD encourages alignment across jurisdictions to facilitate interoperability and convergence of standards. While approaches may vary, the goal is to support companies in complying with multiple standards.
The OECD provides tools and guidance to help companies comply with regulations and mitigate risks, including the OECD AI Incident Monitor, which serves as a repository of relevant resources.
International standards emphasize companies' responsibility for adverse impacts and prioritize risk mitigation and transparency reporting to ensure accountability.
Challenges include the rapid pace of product development, a lack of understanding of responsible AI practices at an international level, and uncertainties regarding what constitutes "responsible AI."
Organizations should try to adhere to internationally accepted principles, engage stakeholders, and prioritize transparency in developing responsible AI policies.
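To make the likelihood-and-significance framing of risk above concrete, here is a minimal sketch of how an organisation might encode it as a simple risk matrix. It is purely illustrative: the 1-5 rating scales, tier cut-offs, and labels are assumptions, not an OECD methodology or a Holistic AI product feature.

```python
# Illustrative only: a minimal likelihood x severity risk matrix inspired by
# the "likelihood and significance of harm" framing discussed above.
# The 1-5 scales, cut-offs, and tier labels are assumptions, not OECD guidance.

def risk_score(likelihood: int, severity: int) -> int:
    """Combine likelihood and severity ratings (each 1-5) into a 1-25 score."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must each be rated 1-5")
    return likelihood * severity

def risk_tier(score: int) -> str:
    """Map a 1-25 score onto coarse tiers using illustrative cut-offs."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a system judged moderately likely (3) to cause severe harm (5)
score = risk_score(likelihood=3, severity=5)
print(score, risk_tier(score))  # 15 high
```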

Our Speakers

Rashad Abelson (Technology Sector Lead, OECD Centre for Responsible Business Conduct)

Rashad Abelson is the Technology Sector Lead at the OECD Centre for Responsible Business Conduct, where he is part of the team working on the development and implementation of OECD due diligence standards. His work focuses on advising governments seeking to integrate OECD standards into domestic legislation and foreign policy, monitoring government implementation of OECD legal instruments, and developing research and tools to support company-level implementation of OECD Responsible Business Conduct (RBC) standards in the development and use of technology.

Siddhant Chatterjee (Policy and Governance Strategist, Holistic AI)

Siddhant Chatterjee is a Policy and Governance Strategist at Holistic AI, where he helps integrate policy and regulatory requirements into the company's proprietary AI Governance products. He is a member of the OECD Network of Experts on AI (ONE.AI) and of the British Standards Institution's (BSI) ART/1 Committee on Artificial Intelligence. He holds a Master's in Technology Policy from UCL and has previously worked with TikTok as one of its first policy analysts in South Asia, advised the Australian Government on AI ethics and disinformation, and worked with the Centre for Data Ethics and Innovation (CDEI) on the algorithmic ethics of climate technologies.
