Developing and deploying transparent AI is critical for ensuring fair, safe and ethical algorithmic decision-making.
This session will explore how to build explainable AI systems and how to meaningfully interpret and communicate AI decision-making to relevant audiences. Without an understanding of how and why an algorithm makes decisions, automated decisions cannot be reviewed, challenged, or redressed. This is crucial in contexts where AI makes decisions or recommendations that affect human lives (e.g., education, healthcare, employment, financial services and insurance). Without transparency, AI systems cannot be held accountable.
Transparency also matters in commercial contexts. Given that many enterprises procure AI systems from technology vendors, it is crucial that they understand what they are buying, how well it works and how it should be used. End-users and citizens should also be informed that they are interacting with or being assessed by AI (e.g., with chatbots or in video interviews).
- How to manage the transparency risks of AI systems
- How to meaningfully communicate AI decision-making to different audiences
- What information to seek when procuring AI systems
- The transparency requirements of AI regulations (e.g., EU AI Act)
- Chair: Graça Carvalho, Director - UCL Centre for Digital Innovation
- Dr Emre Kazim, Co-Founder and COO - Holistic AI
- Charles Kerrigan, Partner (Banking & International Finance) - CMS
- Additional speakers TBC
To register interest in viewing the webinar, please sign up here.