Explaining Machine Learning Outputs: The Role of Feature Importance

Authored by Kleyton da Costa, Machine Learning Researcher at Holistic AI
Published on Aug 4, 2023

In an age where artificial intelligence permeates nearly every aspect of our lives, the inner workings of these intelligent systems often remain shrouded in mystery. However, with the rise of explainable AI (XAI), a groundbreaking paradigm is transforming the AI landscape, bringing transparency and understanding to complex machine learning models. Gone are the days of accepting AI decisions as enigmatic black-box outputs; instead, we are now entering an era where we can uncover the underlying rationale behind AI predictions.

In this post, we briefly introduce two strategies for global feature importance: permutation feature importance and surrogacy feature importance. But first, we cover some key definitions that help categorise the topics that make up the field of explainable AI.

How data scientists define explainability in machine learning models

The table below summarises the main concepts used to classify methods in the field of explainable AI.

Explainability concepts and definitions

| Question | Category | Definition |
| --- | --- | --- |
| Intrinsic or post-hoc? | Intrinsic | The model is interpretable by design; the explanation comes from the model's own structure. |
| Intrinsic or post-hoc? | Post-hoc | Methods treat the model as a black box; agnostic to model architecture; extract relationships between features and model predictions; applied after training. |
| Model-specific or model-agnostic? | Model-specific | Limited to particular model classes; techniques for intrinsically interpretable models are model-specific; designed around a specific model architecture. |
| Model-specific or model-agnostic? | Model-agnostic | Can be applied to any model after it is trained; have no access to the model's internals; work by analysing pairs of feature inputs and model outputs. |
| Local or global? | Local | The interpretation method explains an individual prediction; feature attribution identifies the features most relevant to that single prediction. |
| Local or global? | Global | The interpretation method explains the model's behaviour as a whole; feature attributions are summarised across the entire dataset. |

Permutation feature importance in ML models

Permutation feature importance is a valuable tool in the realm of machine learning explainability. Unlike model-specific methods, it is model-agnostic, meaning it can be applied to various types of predictive algorithms, such as linear regression, random forests, support vector machines, and neural networks. This universality makes it particularly useful when dealing with a diverse range of models and understanding their inner workings.

The process of calculating permutation importance involves systematically shuffling the values of a single feature while keeping all other features unchanged, then re-evaluating the model on the permuted data. A feature that significantly affects the model's predictions will show a considerable drop in accuracy (or whichever performance metric is used) when its values are permuted, and the size of that drop is a direct measure of its importance.
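As a concrete illustration, the sketch below uses scikit-learn's permutation_importance on a random forest trained on the bundled diabetes dataset. The dataset, model, and metric are illustrative assumptions for this example, not choices prescribed by the post.

```python
# Minimal sketch of permutation feature importance with scikit-learn.
# Dataset, model, and scoring are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times on held-out data and record the drop
# in the model's score (R^2 by default for a regressor).
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by the mean drop in performance.
for i in result.importances_mean.argsort()[::-1]:
    print(f"{X.columns[i]:<10} {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Evaluating on held-out data, as in this sketch, keeps the importance estimates tied to generalisation performance rather than to patterns memorised from the training set.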

Permutation importance offers several advantages. Firstly, it provides a quantitative measure of feature importance, allowing data scientists to rank features based on their influence on the model's predictions. This ranking can be crucial for feature selection, feature engineering, and understanding which variables contribute the most to the model's decision-making process.

Secondly, it aids in identifying potential issues such as data leakage or multicollinearity. If a feature exhibits unexpectedly high permutation importance, the model is relying heavily on it; that feature may be strongly correlated with the target variable, or it may be a direct source of data leakage, leading to an overly optimistic evaluation of the model's performance. Strongly correlated features also warrant caution, since permuting one of them creates unrealistic data points and the importance can be split across, or masked by, its correlated counterparts.

Surrogacy feature importance

The Surrogacy Efficacy Score is a technique designed specifically to gain insight into complex "black box" models, which are often challenging to interpret. Such models include deep neural networks and ensemble methods, which are powerful but lack transparency in their decision-making process.

To address this lack of transparency, the Surrogacy Efficacy Score relies on creating interpretable surrogate models. It starts by training a more interpretable model, such as a decision tree, to approximate the behaviour of the complex black-box model. This surrogate model is constructed by partitioning the input data based on the values of specific features and creating simple rules to mimic the original model's predictions.

The training process for the surrogate model involves minimising the loss between the predictions of the black-box model and the surrogate model. By achieving a close resemblance between the two models' predictions, the surrogate model effectively acts as an interpretable proxy for the black-box model. This surrogate can then be analysed and inspected to understand how the complex model makes decisions based on different feature values.
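To make this concrete, here is a minimal sketch of a global surrogate under assumed choices: a gradient-boosted regressor stands in for the black box, a shallow decision tree is fitted to the black box's predictions rather than to the true labels, and R² between the two models' predictions serves as a simple fidelity measure. The specific models and metric are assumptions for illustration, not the exact procedure behind the Surrogacy Efficacy Score.

```python
# Minimal sketch of a global surrogate model. Model choices and the
# fidelity metric (R^2) are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = load_diabetes(return_X_y=True, as_frame=True)

# "Black box" model trained on the original labels.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# Surrogate trained to reproduce the black box's predictions, not the labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, black_box_preds)

# Fidelity: how closely the surrogate mimics the black box (1.0 = perfect mimicry).
fidelity = r2_score(black_box_preds, surrogate.predict(X))
print(f"Surrogate fidelity (R^2 vs. black box): {fidelity:.3f}")

# The tree's rules give a readable approximation of the black box's behaviour.
print(export_text(surrogate, feature_names=list(X.columns)))
```

A fidelity close to 1 suggests the tree's simple rules are a faithful summary of how the black box responds to the features; a low fidelity means the surrogate's explanations should not be trusted as a description of the original model.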

The Surrogacy Efficacy Score is particularly useful in scenarios where model transparency is critical, such as in regulatory compliance, healthcare, finance, and other domains where interpretability and accountability are necessary. By providing a more understandable representation of the complex model's behaviour, the technique enables stakeholders to trust the predictions and make informed decisions based on the model's output.

Making machine learning models transparent and explainable: Guide to feature importance

In conclusion, transparency and explainability are becoming increasingly crucial in the deployment of AI and ML models. As we rely more on these models to drive critical decisions in real-world applications, understanding their inner workings and being able to explain their predictions is vital for building trust and ensuring accountability.

The strategies of permutation feature importance and surrogate feature importance offer effective ways to shed light on the "black-box" nature of models, supporting informed and responsible use of AI. By adopting these techniques, we foster a culture of transparent and trustworthy AI systems that can be confidently embraced and integrated into various aspects of our lives.

Holistic AI – helping organisations embrace explainable AI

As researchers and practitioners continue to develop and refine these explainable AI methods, we can look forward to a future where AI becomes an indispensable tool, contributing positively to society while maintaining a high standard of transparency and interpretability.

At Holistic AI, our mission is to help companies validate their machine learning-based systems, allowing them to clear logistical hurdles and enabling the safe, transparent, and reliable use of AI. Schedule a call to find out how we can help your organisation.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
