
Three-Part Event Series in Partnership with UCL Centre for Digital Innovation Powered by AWS Taking Place in November

November 16, 2022
Authored by
Kleyton da Costa
Researcher at Holistic AI

November sees the kick-off of our three-part event series in partnership with UCL Centre for Digital Innovation (CDI) powered by AWS. In this series of free live events throughout the month, our experts will explore topics on AI transparency, AI fairness and bias, and how to manage the privacy risks of AI systems.

16th November: AI Transparency - Ensuring Accountability and Responsibility

Developing and deploying transparent AI is critical for ensuring fair, safe, and ethical algorithmic decision-making.

This session will explore how to build explainable AI systems and how to meaningfully interpret and communicate AI decision-making to relevant audiences.

24th November: AI Fairness and Bias

The adoption of AI systems is proliferating, and AI is transforming every sector of business and society. As we continue to automate an ever-increasing array of processes, it is vital to be mindful of the associated risks and to ensure our systems are designed to safeguard the public interest.

This session will explore how fairness and bias risks of AI can be monitored, managed and minimised. We will discuss sector-specific best practices along with engineering techniques for assessing, measuring, and mitigating bias.

30th November: How to Manage the Privacy Risks of AI Systems

Organisations use AI and machine learning to inform decisions and automate critical processes. From helping organisations save time and money by automating mundane tasks to providing increased accuracy in medical diagnosis, AI has substantial benefits for our society. However, large amounts of sensitive data are processed to accomplish these tasks, and while more data usually means better-performing AI, it can also present privacy risks.

This session will explore engineering techniques that can mitigate the privacy risks of AI systems, promote data minimisation, and ensure AI is privacy-preserving by design. We will discuss the trade-offs between safeguarding privacy and implementing broader responsible AI principles, and provide sector-specific best practices.

About Holistic AI

Holistic AI is an AI risk management company that aims to empower enterprises to adopt and scale AI confidently. Our AI risk management software platform audits and assures the code, data, policies and processes of AI systems. As a result, enterprises can maximise their AI's value, minimise reputational, legal and commercial risks, and accelerate innovation.

We have pioneered the field of AI risk management and have deep practical experience auditing AI systems, having reviewed 100+ enterprise AI projects covering 20k+ different algorithms. Our clients and partners include Fortune 500 corporations, SMEs, governments and regulators.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
