Holistic AI Library Tutorial: Fairness Analysis for Binary Classification with Python

April 5, 2023
Authored by
Franklin Cardenoso Fernandez
Researcher at Holistic AI

Machine learning models have become part of our day-to-day lives, increasingly being used to make predictions and decisions, classify images or text, recognise objects, and recommend products.

While one of the main objectives when developing machine learning models is maximising their accuracy or efficacy, it is also important to consider their limitations and challenges, especially in relation to bias. Bias occurs when a model produces different outcomes for different subgroups, and it can result from various factors at different stages of the model's development. Mitigating bias is key to unlocking value responsibly and equitably, but because the nature of bias in these systems is not simply technical, addressing the problem involves assessing models against a variety of metrics that can then be used to improve their results and reduce bias.

In this blog post, we use the COMPAS tool as an example of a biased system and illustrate how bias can be measured using Holistic AI’s open-source library.

Figure 1: Fairness Machine Learning

The COMPAS case

Probably one of the most well-known cases of bias in an automated system is Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, which used a machine learning model as a decision-support tool to score the likelihood of recidivism. To make predictions, the algorithm was fed various pieces of information about each individual, including age, sex, and criminal history. Based on this information, the algorithm assigned defendants scores from 1 to 10 indicating how likely they were to re-offend. However, an investigation into the tool found that the model was biased against black people, assigning them higher risk scores than white people.

Figure 2. Taken from ProPublica

This situation drew attention to the urgent need to improve the equity of predictions and decisions made by machine learning models, given the significant impact these systems can have on individuals' lives. The case is considered a key motivator in the creation of tools to help developers identify and address bias when developing AI models. A number of metrics have since been proposed to measure the fairness of a model, such as statistical parity and disparate impact.
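To give a flavour of what these two metrics capture, the short sketch below computes both of them for a set of made-up predictions; the arrays preds and is_protected are hypothetical and used purely for illustration.

import numpy as np

#hypothetical predicted labels and protected-group membership, for illustration only
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
is_protected = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=bool)

#rate of positive predictions within each group
rate_protected = preds[is_protected].mean()
rate_reference = preds[~is_protected].mean()

#statistical parity: the difference between the two rates
print(rate_protected - rate_reference)
#disparate impact: the ratio between the two rates
print(rate_protected / rate_reference)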

Measuring bias with the Holistic AI library

One example of these assessment tools is the open-source library built by Holistic AI, created to assess and improve the trustworthiness of AI systems through a set of techniques to measure and mitigate bias intuitively and easily. A distinctive feature of this library is its compatibility with, and similar syntax to, the well-known scikit-learn library: in most cases the user only needs to separate the protected group from the training dataset and then follow the traditional pipeline to fit the model and predict the outcomes, as we will see later.
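As a rough preview of the workflow we follow in the rest of this post, the pattern looks something like this (a minimal sketch, assuming X holds the features including a binary 'Hispanic' column and y holds the binary outcome):

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from holisticai.bias.metrics import classification_bias_metrics

#fit any scikit-learn style model as usual
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
model = LogisticRegression(max_iter=500).fit(X_train, y_train)

#build the two group masks from the protected attribute and measure bias
group_a = X_test["Hispanic"] == 0
group_b = X_test["Hispanic"] == 1
classification_bias_metrics(group_a, group_b, model.predict(X_test), y_test, metric_type='both')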

For this example, we will use the holisticai library to address the bias problem in a pre-processed COMPAS dataset, which can be found here.

First, we simply need to install the library into our Python environment using the following command:


pip install holisticai

Data exploration

This version of the COMPAS dataset can be loaded and explored from our working directory using the pandas package:


import pandas as pd

#load the pre-processed COMPAS dataset from the working directory
df = pd.read_csv('propublicaCompassRecividism_data_fairml.csv')
df.info()

As we can see above, this dataset is composed of 12 features, where the outcome is the 'Two_yr_Recidivism' column, which indicates whether or not a person commits a crime in the following two years. The remaining columns include information such as the offender’s criminal record, ethnicity, and sex. Moreover, there are no missing values in the dataset, so we do not need to take any additional action there.
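If you want to verify this yourself, a quick check using standard pandas calls is shown below.

#list the columns and count any missing values per column
print(df.columns.tolist())
print(df.isnull().sum())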

To analyse bias in the model, in this example we will select the 'Hispanic' column as our protected attribute, but feel free to select any column that you want to analyse. As can be seen below, the values in the Hispanic column are 0 and 1, where 0 indicates that the offender is not Hispanic and 1 indicates that they are.
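A quick way to inspect these values, again with standard pandas calls, is shown below.

#count how many offenders fall into each value of the protected attribute
print(df['Hispanic'].value_counts())
#the same counts expressed as proportions of the dataset
print(df['Hispanic'].value_counts(normalize=True))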

We can use plots from the holisticai library to observe the proportions of the data and then perform a quick exploration.


from holisticai.bias.plots import group_pie_plot, frequency_plot

#select the Hispanic column from the dataset
p_attr = df['Hispanic']
#select the recidivism outcome from the dataset
y = df['Two_yr_Recidivism']
#create a plot to show the proportion of the offenders in the dataset that are hispanic
group_pie_plot(p_attr)

As we can see, Hispanic people (labelled as 1) represent only 8% of the whole dataset.


#plot the frequency of people in the Hispanic and non-Hispanic group that reoffend within 2 years
frequency_plot(p_attr, y)

Using the frequency_plot function, we can also observe that the 0 group (non-Hispanic) appears far more often among those who reoffend than the 1 group (Hispanic). In other words, in absolute terms more non-Hispanic offenders reoffended within two years than Hispanic offenders, which partly reflects the fact that non-Hispanic offenders make up the large majority of the dataset.
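To complement the raw counts shown by frequency_plot, we can also compute the reoffence rate within each group directly with pandas; this is a small sketch rather than part of the library.

#reoffence rate within each group: rows are the protected attribute, columns the outcome
pd.crosstab(p_attr, y, normalize='index')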

Model training

We will begin by training the model in the traditional way, without considering the influence of any protected attribute, and then calculate some fairness metrics to assess the model's predictions.


#work on a copy of the original dataframe
df_enc = df.copy()

from sklearn.model_selection import train_test_split
#create a dataframe with only the predictors
X = df_enc.drop(columns=['Two_yr_Recidivism'])
#create a dataframe with only the outcome variable
y = df_enc['Two_yr_Recidivism']

#split the data into training and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

For this example, we use a traditional logistic regression model.


from sklearn.linear_model import LogisticRegression
#create the logistic regression model and fit it on the training data
LR = LogisticRegression(random_state=42, max_iter=500)
LR.fit(X_train, y_train)
#then use it to predict outcomes for the test data
y_pred = LR.predict(X_test)

Now, we can calculate the metrics with the predicted outcomes:


from sklearn import metrics
#create the metrics dictionary
metrics_dict = {
    "Accuracy": metrics.accuracy_score,
    "Balanced accuracy": metrics.balanced_accuracy_score,
    "Precision": metrics.precision_score,
    "Recall": metrics.recall_score,
    "F1-Score": metrics.f1_score}

#define a function that returns the metrics as a dataframe
def metrics_dataframe(y_pred, y_true, metrics_dict=metrics_dict):
    metric_list = [[pf, fn(y_true, y_pred)] for pf, fn in metrics_dict.items()]
    return pd.DataFrame(metric_list, columns=["Metric", "Value"]).set_index("Metric")

#print the metrics for the test data
metrics_dataframe(y_pred, y_test)

We obtained the values above, which are not bad but could be improved with further optimisation.
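One straightforward way to attempt such an improvement would be a simple hyperparameter search; the sketch below is purely illustrative and the grid over the regularisation strength C is an arbitrary choice.

from sklearn.model_selection import GridSearchCV

#illustrative search over the regularisation strength of the logistic regression
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(random_state=42, max_iter=500),
                      param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)

#evaluate the best model found on the test data
metrics_dataframe(search.best_estimator_.predict(X_test), y_test)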

Measuring bias

Now we need to measure the bias present in the model with respect to the protected attribute. To calculate the bias of the model, the holisticai library contains a range of useful metrics. To use these functions, we only need to separate the protected attribute from the data and compare the predictions with the expected outcomes.


#boolean mask for offenders who are not Hispanic
group_a = X_test["Hispanic"]==0
#boolean mask for offenders who are Hispanic
group_b = X_test["Hispanic"]==1
#predicted and actual outcomes for the test data
y_pred  = LR.predict(X_test)
y_true  = y_test

The library includes a useful function, classification_bias_metrics, that computes a range of relevant classification bias metrics, such as statistical parity and disparate impact, and displays them in a table that also includes fair reference values for comparison. The function allows us to select which metrics to calculate by setting the metric_type parameter to equal_outcome, equal_opportunity, or both. For this example, we will calculate all of the metrics, so we pass both as the value of the metric_type parameter.


from holisticai.bias.metrics import classification_bias_metrics
classification_bias_metrics(group_a, group_b, y_pred, y_true, metric_type='both')

These metrics help us determine whether the model is biased or not. For example, for the statistical parity metric, values lower than -0.1 or higher than 0.1 indicate bias; for disparate impact, values lower than 0.8 or higher than 1.2 indicate bias. As we can see, the library presents not only the calculated values of the fairness metrics but also the reference values that an ideal, unbiased model would achieve. Therefore, the closer the values are to the reference, the fairer our model is.
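As a quick sanity check, these rule-of-thumb thresholds can be encoded in a couple of helper functions; the threshold values are the ones quoted above, and the function names are just illustrative.

#flag values that fall inside the rule-of-thumb fair ranges quoted above
def statistical_parity_is_fair(value):
    return -0.1 <= value <= 0.1

def disparate_impact_is_fair(value):
    return 0.8 <= value <= 1.2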

Given the values in this table, we can clearly observe that the model is biased against Hispanics, who are predicted to re-offend at a higher rate than non-Hispanics. The remaining metrics also provide useful information. Both the Four-Fifths rule, which is widely used in selection contexts and flags different outcome rates for different subgroups, and the Equality of Opportunity Difference, which measures the difference between the true positive rates of the privileged and unprivileged groups, are violated, again indicating bias. You can find more details of the metrics in the library's reference documentation.
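For intuition, the true positive rates behind the Equality of Opportunity Difference can also be computed by hand with numpy; this is a sketch rather than the library's implementation, and its sign convention may differ from the value shown in the table.

import numpy as np

#convert everything to plain arrays for positional indexing
g_a = np.asarray(group_a)
g_b = np.asarray(group_b)
yt = np.asarray(y_true)
yp = np.asarray(y_pred)

#true positive rate (recall) within each group
tpr_a = yp[g_a & (yt == 1)].mean()
tpr_b = yp[g_b & (yt == 1)].mean()

#gap between the two true positive rates
print(tpr_b - tpr_a)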

Summary

In this tutorial, we explored the holisticai library, which allows us to measure the bias present in AI models. In this example we used the classification_bias_metrics function, but the library provides functions to measure bias not only for binary classification but also for other task types; you can find them, along with additional examples, in the library's reference documentation.

If you want to follow this tutorial for yourself, you can do so here.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
