Champions of the school of thought known as responsible AI dream of a future in which artificial intelligence is free from pre-existing societal biases – and there is now a plethora of resources available to help actualise this vision.
In this blog post, we explore how two of them, the Holistic AI and PyTorch libraries, can be used to implement bias mitigation strategies for artificial neural networks.
The Holistic AI Library is an open-source resource for analysing and visualising bias in machine learning models. It offers a range of metrics for evaluating model performance and identifying potential biases, such as statistical parity, equal opportunity, and equalised odds. PyTorch, meanwhile, is a popular machine learning library that provides tools for building and training neural network models.
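To build intuition for what these metrics measure, here is a minimal numpy sketch of two of them, statistical parity difference and disparate impact. The function names and signatures below are ours for illustration; the Holistic AI library ships its own implementations of these metrics, whose exact module paths and argument conventions may differ across versions.

```python
import numpy as np

def statistical_parity(y_pred, group_a, group_b):
    """Difference in positive-outcome rates: P(y_pred=1 | a) - P(y_pred=1 | b)."""
    return y_pred[group_a].mean() - y_pred[group_b].mean()

def disparate_impact(y_pred, group_a, group_b):
    """Ratio of positive-outcome rates; 1.0 means perfect parity."""
    return y_pred[group_a].mean() / y_pred[group_b].mean()

# Toy predictions for eight applicants, split into two groups.
y_pred  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group_a = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
group_b = ~group_a

print(statistical_parity(y_pred, group_a, group_b))  # 0.75 - 0.25 = 0.5
print(disparate_impact(y_pred, group_a, group_b))    # 0.75 / 0.25 = 3.0
```

A statistical parity difference of 0 (or a disparate impact ratio of 1) indicates that both groups receive positive outcomes at the same rate.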
In this example, we first train a baseline model without any mitigation strategy applied. We then retrain the model with a mitigation strategy and compare the two using bias metrics.
Our objective is to create an architecture that can mitigate biases present in training data. To achieve this, we will use PyTorch to build a multilayer perceptron (MLP) model. An MLP is a neural network consisting of multiple layers of interconnected nodes, which can learn complex relationships between input and output data.
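A small MLP of this kind can be defined in PyTorch in a few lines. The layer sizes below are illustrative choices, not prescribed by the original implementation:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """A small feed-forward network for binary credit-risk classification."""
    def __init__(self, in_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit; apply a sigmoid at inference
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# The German Credit data has 20 raw attributes (more after one-hot encoding).
model = MLP(in_features=20)
logits = model(torch.randn(4, 20))
print(logits.shape)  # torch.Size([4, 1])
```

Such a model would typically be trained with `nn.BCEWithLogitsLoss` on the binary good/bad credit label.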
For this implementation, we are using the German Credit Dataset, a well-known benchmark in machine learning and data analysis that contains information about credit applications submitted to a German bank. The dataset is often used to evaluate machine learning algorithms and models for credit risk assessment.
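In the commonly used OpenML version of this dataset (`credit-g`), sex is encoded jointly with marital status in a `personal_status` column, so a typical preprocessing step is to derive a binary protected attribute and a binary target. The sketch below runs on a few illustrative rows; the column names follow the OpenML schema, but treat them as assumptions if you load the data from elsewhere:

```python
import pandas as pd

# Illustrative rows mimicking the OpenML "credit-g" schema.
df = pd.DataFrame({
    "personal_status": ["male single", "female div/dep/mar", "male mar/wid"],
    "credit_amount": [1169, 5951, 2096],
    "class": ["good", "bad", "good"],
})

# Derive a binary protected attribute (1 = female) and a binary target (1 = good credit).
df["sex"] = df["personal_status"].str.startswith("female").astype(int)
df["label"] = (df["class"] == "good").astype(int)
```

The `sex` column can then serve as the group membership vector for the bias metrics, while `label` is the training target.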
In this implementation, we use the Disparate Impact Remover, a pre-processing bias mitigation strategy. This method modifies the values of some features to reduce bias while preserving the rank order within each group.
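The Holistic AI library provides a ready-made implementation of this strategy; to show the underlying idea, here is a self-contained numpy sketch of a full repair in the style of Feldman et al.: each group's feature values are mapped, rank for rank, onto a common target distribution (the per-rank median across groups), so the groups become indistinguishable on that feature while the ordering inside each group is preserved. The function name and signature are ours, not the library's API.

```python
import numpy as np

def repair_feature(x, group, repair_level=1.0):
    """Rank-preserving repair: move each group's distribution toward a
    common target distribution while keeping within-group ordering intact."""
    x = x.astype(float)
    out = x.copy()
    group_values = [x[group == g] for g in np.unique(group)]
    for g in np.unique(group):
        vals = x[group == g]
        # Normalised ranks of each value within its own group, in [0, 1].
        ranks = vals.argsort().argsort() / max(len(vals) - 1, 1)
        # Target: per-rank median across the groups' quantile functions.
        target = np.median([np.quantile(gv, ranks) for gv in group_values], axis=0)
        # repair_level interpolates between no repair (0) and full repair (1).
        out[group == g] = (1 - repair_level) * vals + repair_level * target
    return out

income = np.array([10., 20., 30., 40., 100., 200., 300., 400.])
group  = np.array([0, 0, 0, 1, 1, 1]) if False else np.array([0, 0, 0, 0, 1, 1, 1, 1])
repaired = repair_feature(income, group)
print(repaired)  # both groups now share the distribution [55, 110, 165, 220]
```

After the repair, a downstream model can no longer infer group membership from this feature, yet within each group the relative ordering of applicants is unchanged.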
As the use of machine learning continues to soar across a wide range of applications, it is critical to develop and apply effective bias mitigation strategies to ensure that these models are fair and free of prejudice. As this blog shows, the Holistic AI and PyTorch libraries offer an effective way to achieve this aim for artificial neural networks. We can illustrate this by creating a table comparing the bias metrics of the baseline and mitigated models.
Once the MLP model is built, the Holistic AI library can be used to analyse its performance and identify any biases that may be present. The library also provides a set of tools for mitigating these biases, such as reweighting the training data, adjusting the decision threshold, or even modifying the model architecture with in-processing strategies.
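One of those tools, reweighting, is easy to sketch without the library. The classic Kamiran–Calders scheme assigns each sample the weight P(group) × P(label) / P(group, label), so that group membership and label become statistically independent under the weighted distribution. The helper below is our own minimal version, not the library's API:

```python
import numpy as np

def reweighting(group, y):
    """Weight each (group, label) cell by P(group) * P(label) / P(group, label)."""
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for lbl in np.unique(y):
            mask = (group == g) & (y == lbl)
            p_joint = mask.mean()
            if p_joint > 0:
                w[mask] = (group == g).mean() * (y == lbl).mean() / p_joint
    return w

group = np.array([0, 0, 0, 1, 1, 1])
y     = np.array([1, 1, 0, 1, 0, 0])
w = reweighting(group, y)
print(w)  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

In PyTorch, these weights can be applied during training by computing the loss with `reduction='none'` and multiplying each sample's loss by its weight before averaging.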
Combining the Holistic AI Library with PyTorch allows us to build robust machine learning models with reduced bias for use in various real-world applications, helping ensure that the decisions they make are fair and equitable for all individuals, regardless of race, gender, or other personal attributes.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.