Six Methods to Mitigate Bias in Multiclassification Machine Learning Models

November 8, 2023
Authored by
Franklin Cardenoso Fernandez
Researcher at Holistic AI

In machine learning, classification models play a crucial role in decision-making. Not all decisions made by these models are binary; some are category-based, involving a choice among several possible labels.

This is where multiclassification models, which sort data into multiple classes, come into play. Multiclassification models are essential as they reflect the nuanced, multifaceted nature of real-world problems and decision-making processes.

However, as with other machine learning models, multiclassification models are not exempt from shortcomings. Bias in particular is a significant challenge as it can result in unfair and inaccurate predictions with material consequences.

In this blog, we will explore six different strategies for bias mitigation in multiclassification tasks, explaining how each works and how it can enhance model fairness and accuracy.

Pre-processing

  1. Reweighing
  2. Disparate Impact Remover
  3. Correlation Remover

In-processing

  1. Fair Scoring

Post-processing

  1. LP Debiaser
  2. ML Debiaser

Bias in multiclassification tasks: Why do we need to mitigate it?

Before we discuss specific bias mitigators, it is essential to understand that bias in multiclassification tasks shares similarities with binary tasks but also comes with added complexities due to the increased number of target categories (from 2 to N).

As in the binary case, bias can manifest in multiclass models, favouring certain groups while unfairly disadvantaging others. This can result in imbalanced and inequitable treatment of these groups, making it crucial to address and mitigate bias in these types of models.

Bias mitigator categories: pre-processing, in-processing, post-processing

The six bias mitigation strategies we will explore are divided into three categories: pre-processing, in-processing and post-processing.

The first category, pre-processing, involves modifying the dataset before feeding it into the models, with the goal of balancing outcomes and class representation.

In-processing strategies, on the other hand, refer to adjusting the training process through the application of specialised algorithms that encourage models to make more equitable predictions.

Finally, post-processing strategies deal with trained models by calibrating model outputs to achieve fairness while attempting to preserve accuracy.

Pre-processing

  • Reweighing

This is a pre-processing method proposed by Kamiran and Calders that addresses the bias problem without altering the actual class labels. Unlike methods that modify sample class labels, reweighing assigns weights to individual instances of the training dataset, chosen to correct any imbalance related to sensitive attributes. By adjusting these weights, the training dataset becomes more equitable, ensuring that no particular class or group is favoured or discriminated against during model training.

This strategy’s versatility makes it particularly interesting as the method can be easily integrated into different machine learning tasks, such as binary classification.
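
To make the idea concrete, here is a minimal sketch of how reweighing weights can be computed for a multiclass label and a single group attribute, following the expected-versus-observed frequency ratio described by Kamiran and Calders. It assumes a plain pandas/NumPy setup rather than the Holistic AI Library's API.

```python
import numpy as np
import pandas as pd

def reweighing_weights(group: pd.Series, y: pd.Series) -> pd.Series:
    """Return one weight per sample: P(g) * P(y) / P(g, y)."""
    p_group = group.value_counts(normalize=True)   # P(g)
    p_label = y.value_counts(normalize=True)       # P(y)
    joint = pd.crosstab(group, y, normalize=True)  # P(g, y)

    weights = np.empty(len(y))
    for i, (g, label) in enumerate(zip(group, y)):
        # Expected frequency under independence divided by observed frequency
        weights[i] = p_group[g] * p_label[label] / joint.loc[g, label]
    return pd.Series(weights, index=y.index)
```

The resulting weights can then be passed to most scikit-learn estimators through the `sample_weight` argument of `fit`.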

  • Disparate Impact Remover

The Disparate Impact Remover strategy constructs a new dataset by modifying the unprotected attributes in a way that removes disparate impact, ensuring that individuals from different groups are treated equitably. While the protected attribute is typically used as the reference for this modification, in practice the repair does not have to be driven by that attribute alone, or even by a single protected attribute, which allows for a more comprehensive approach to fairness. The strategy focuses on preserving the rank of the attributes so that class labels can still be predicted accurately.
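
As a simplified sketch of the underlying idea (not the exact algorithm from the original publication or the Holistic AI Library), the snippet below "repairs" a single numeric feature by mapping each value onto a common median distribution while preserving its rank within its group; a `repair_level` of 1 repairs the feature fully, while 0 leaves it unchanged.

```python
import numpy as np
import pandas as pd

def repair_feature(x: pd.Series, group: pd.Series, repair_level: float = 1.0) -> pd.Series:
    """Rank-preserving repair of one feature across groups."""
    quantiles = np.linspace(0, 1, 101)
    # Group-conditional quantile functions on a shared grid
    group_q = {g: np.quantile(x[group == g], quantiles) for g in group.unique()}
    # "Median" distribution across groups
    median_q = np.median(np.vstack(list(group_q.values())), axis=0)

    repaired = x.astype(float).copy()
    for g in group.unique():
        mask = group == g
        ranks = x[mask].rank(pct=True).to_numpy()        # within-group rank in (0, 1]
        target = np.interp(ranks, quantiles, median_q)   # same rank, common distribution
        repaired[mask] = (1 - repair_level) * x[mask] + repair_level * target
    return repaired
```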

  • Correlation Remover

This method addresses potential correlations between sensitive and non-sensitive attributes, removing these correlations while preserving as much of the original data as possible. The degree of projection can be selected through an alpha parameter, which adds flexibility in controlling how much of the correlation is filtered out. In practical terms, the method centres the sensitive features, fits a linear regression of the remaining features on them, and keeps the least-squares residuals.
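
The following minimal sketch, written in plain NumPy with illustrative parameter names rather than the Holistic AI Library's API, shows this projection: the sensitive columns are centred, the remaining features are regressed on them, and the residuals are blended with the original data through `alpha`.

```python
import numpy as np

def remove_correlation(X: np.ndarray, S: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """X: non-sensitive features (n, d); S: sensitive features (n, k)."""
    S_centered = S - S.mean(axis=0)
    # Least-squares coefficients of the non-sensitive features on the centred sensitive ones
    beta, *_ = np.linalg.lstsq(S_centered, X, rcond=None)
    residuals = X - S_centered @ beta
    # alpha = 1 removes the linear correlation entirely; alpha = 0 keeps X unchanged
    return alpha * residuals + (1 - alpha) * X
```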

In-processing

  • Fair Scoring

The Fair Scoring System is a strategy designed for multiclassification tasks to develop interpretable scoring systems while considering both sparsity and fairness constraints.

This method builds on Mixed-Integer Linear Programming (MILP) techniques, extending the Supersparse Linear Integer Models (SLIM) framework, initially designed for binary classification, and using the one-vs-all paradigm to learn an optimal scoring system. This means that one scoring system is generated for each label of the dataset; to classify a new sample, every system scores it and the label with the highest score is predicted. This gives the approach versatility and the ability to accommodate various operational constraints, not limited to fairness or sparsity.
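
To illustrate the one-vs-all prediction step only, the sketch below uses made-up class names and point values; in the actual method these integer weights are learned by solving the MILP under sparsity and fairness constraints.

```python
import numpy as np

# One integer weight vector (plus intercept) per class; values are illustrative.
scoring_systems = {
    "low_risk":    {"weights": np.array([ 2, -1, 0,  3]), "intercept": -1},
    "medium_risk": {"weights": np.array([ 0,  2, 1, -2]), "intercept":  0},
    "high_risk":   {"weights": np.array([-3,  1, 4,  0]), "intercept":  2},
}

def predict(x: np.ndarray) -> str:
    """Score the sample with every per-class system and return the best class."""
    scores = {label: s["weights"] @ x + s["intercept"] for label, s in scoring_systems.items()}
    return max(scores, key=scores.get)

print(predict(np.array([1, 0, 1, 1])))  # the label with the highest score wins
```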

Post-processing

  • LP Debiaser

This method extends the linear programming approach used in the binary classification case to multiclass outcomes, focusing on enforcing fairness on a black-box classifier without altering its internal parameters.

To achieve this, the LP Debiaser strategy uses the predicted labels, the true labels and the protected attributes to construct a linear program whose solution is a new set of adjusted predictions, expressed through the conditional probabilities of the adjusted predictor under a chosen fairness criterion (in this case, equalised odds or equality of opportunity). For the linear program to be valid, both the loss function and the fairness criteria must be linear with respect to these conditional probability matrices, one per protected group.
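
A rough sketch of such a linear program for two protected groups and three classes is shown below, written with cvxpy for readability. The probability tables are random placeholders standing in for empirical estimates from held-out data, and group priors are omitted for brevity, so this illustrates the structure of the program rather than a faithful implementation.

```python
import cvxpy as cp
import numpy as np

K, groups = 3, [0, 1]
rng = np.random.default_rng(0)

# Placeholder estimates (replace with values computed from a validation set):
# p_joint[a][y, i] ~ P(Y = y, Yhat = i | A = a)
p_joint = {a: rng.dirichlet(np.ones(K * K)).reshape(K, K) for a in groups}
# p_pred[a][y, i] ~ P(Yhat = i | Y = y, A = a)
p_pred = {a: p_joint[a] / p_joint[a].sum(axis=1, keepdims=True) for a in groups}

# Decision variables: T[a][i, j] = P(adjusted = j | predicted = i, group = a)
T = {a: cp.Variable((K, K), nonneg=True) for a in groups}

# Accuracy of the adjusted predictor is linear in T (group priors omitted here)
accuracy = sum(cp.trace(p_joint[a] @ T[a]) for a in groups)

constraints = [cp.sum(T[a], axis=1) == 1 for a in groups]   # rows are valid distributions
constraints.append(p_pred[0] @ T[0] == p_pred[1] @ T[1])    # equalised odds across groups

problem = cp.Problem(cp.Maximize(accuracy), constraints)
problem.solve()
# T[a].value now holds the adjusted-prediction probabilities for group a
```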

  • ML Debiaser

The ML Debiaser strategy addresses bias in multiclassification tasks by treating it as a regularised optimisation problem that is solved using the projected SGD method.

Unlike the previous method, which uses equalised odds or equality of opportunity as fairness constraints, this technique focuses on statistical parity. It post-processes the probability scores produced by a classifier to control the fairness constraint with respect to a sensitive attribute, producing thresholding rules with randomisation near the thresholds; a regularisation parameter is introduced to control the amount of randomisation.

Moreover, rather than establishing deterministic rules, ML Debiaser opts for randomised prediction rules, given that achieving statistical parity by assigning a constant prediction in each subpopulation may be suboptimal.
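
As a highly simplified illustration of what such a randomised rule looks like for a single score (not the paper's full projected-SGD optimisation, which also finds the group-specific thresholds), the sketch below accepts scores inside a band of width `2 * gamma` around the threshold with a probability that ramps linearly from 0 to 1, so `gamma` controls how much randomisation is applied.

```python
import numpy as np

def randomized_rule(scores: np.ndarray, threshold: float, gamma: float,
                    rng: np.random.Generator) -> np.ndarray:
    """Return 0/1 decisions; `threshold` would come from the debiaser's optimisation."""
    # 0 below the band, 1 above it, linear ramp inside [threshold - gamma, threshold + gamma]
    p_positive = np.clip((scores - threshold + gamma) / (2 * gamma), 0.0, 1.0)
    return (rng.random(len(scores)) < p_positive).astype(int)

rng = np.random.default_rng(0)
decisions = randomized_rule(np.array([0.2, 0.49, 0.51, 0.9]),
                            threshold=0.5, gamma=0.05, rng=rng)
```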

Pros and cons of bias mitigation methods for multiclass tasks

The mitigation methods we have presented offer a comprehensive toolkit for improving the fairness and accuracy of multiclassification tasks, each one bringing its own advantages.

For example, pre-processing techniques such as Reweighing, Disparate Impact Remover or Correlation Remover are particularly advantageous because they allow the dataset to be modified before model training, ensuring that the model receives balanced data.

On the other hand, in-processing methods, such as the Fair Scoring strategy, provide the advantage of adjustments made during the training process itself. However, although this is valuable when accuracy and fairness constraints need to be optimised jointly by the model, the training process can sometimes be slow, or even yield suboptimal solutions.

Finally, post-processing techniques offer flexibility by allowing bias mitigation after model training, facilitating the calibration of model outcomes under different fairness constraints without modifying the model architecture or requiring retraining.

Conclusion

In this blog, we have learned about different bias mitigation techniques used in multiclassification tasks. The combination of these techniques offers a variety of approaches to the bias problem.

Depending on the specific requirements and constraints of a particular context, one or more of these strategies can be effectively employed to enhance fairness in machine learning models.

For more details about these strategies, you can consult the original publications. Or, if you want to implement and test them with your own data, use the Holistic AI Library, an open-source resource that includes all the techniques presented here, as well as various metrics to support your implementations.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
