Three-Part Event Series in Partnership with UCL Centre for Digital Innovation Powered by AWS Taking Place in November

November 16, 2022

November sees the kick-off of our three-part event series in partnership with UCL Centre for Digital Innovation (CDI) powered by AWS. In this series of free live events throughout the month, our experts will explore topics on AI transparency, AI fairness and bias, and how to manage the privacy risks of AI systems.

16th November: AI Transparency - Ensuring Accountability and Responsibility

Developing and deploying transparent AI is critical for ensuring fair, safe, and ethical algorithmic decision-making.

This session will explore how to build explainable AI systems and how to meaningfully interpret and communicate AI decision-making to relevant audiences.

24th November: AI Fairness and Bias

The adoption of AI systems is proliferating, and through its use, AI is transforming all sectors of business and society. As we continue to automate an ever-increasing array of processes, it is vital to be mindful of the associated risks and ensure our systems are designed to safeguard the public interest.

This session will explore how fairness and bias risks of AI can be monitored, managed and minimised. We will discuss sector-specific best practices along with engineering techniques for assessing, measuring, and mitigating bias.

30th November: How to Manage the Privacy Risks of AI Systems

Organisations use AI and machine learning to inform decisions and automate critical processes. From saving time and money by automating mundane tasks to increasing the accuracy of medical diagnoses, AI offers substantial benefits to society. However, accomplishing these tasks requires processing large amounts of sensitive data, and while more data usually means better-performing AI, it can also present privacy risks.

This session will explore engineering techniques that can mitigate the privacy risks of AI systems, promote data minimisation, and ensure AI is privacy-preserving by design. We will discuss the trade-offs between safeguarding privacy and implementing broader responsible AI principles and provide sector-specific best practices.

About Holistic AI

Holistic AI is an AI risk management company that aims to empower enterprises to adopt and scale AI confidently. The AI risk management software platform audits and assures AI systems' code, data, policies and processes. As a result, enterprises can maximise their AI's value, minimise reputational, legal and commercial risks, and accelerate innovation.

We have pioneered the field of AI risk management and have deep practical experience auditing AI systems, having reviewed 100+ enterprise AI projects covering 20,000+ algorithms. Our clients and partners include Fortune 500 corporations, SMEs, governments and regulators.


