The Holistic AI library is an open-source tool for assessing and improving the trustworthiness of AI systems. The current version of the library offers a set of techniques to measure and mitigate Bias across a variety of tasks.
The long-term goal of the library is to offer techniques to measure and mitigate AI risks across five areas: Bias, Efficacy, Robustness, Privacy, and Explainability. Together, these will enable a holistic assessment of AI systems.
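As an illustration of the kind of measurement such a library enables, the sketch below computes the statistical parity difference, a common bias metric that compares positive-prediction rates between two groups. This is a minimal, self-contained example in plain NumPy, not the library's own API; the function name and the example data are hypothetical.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    A value near 0 indicates parity between the groups; larger
    absolute values indicate a stronger disparity in outcomes.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return rate_b - rate_a

# Hypothetical binary predictions for eight individuals in two groups.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(preds, groups))  # -0.5: group 1 is favored less often
```

Mitigation techniques then aim to reduce such disparities, for example by reweighting training data or post-processing predictions.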
Our mission at Holistic AI is to reduce risks connected to AI and data projects.
Here we introduce the risk mitigation roadmaps, a set of guides that will help you mitigate some of the most common AI risks.