
Evaluating Explainability for Machine Learning Predictions using Model-Agnostic Metrics

In our recent peer-reviewed paper, Holistic AI researchers and additional contributors highlight the importance of explainability in AI, defining explainability in terms of interpretability. After reviewing the existing literature, we introduce a set of computational, model-agnostic metrics to support explainability in AI and then apply these metrics in a series of experiments. The metrics include:

  1. Feature Importance-based Metrics: Evaluate the distribution and stability of feature importance across the model, using concepts like entropy and divergence to measure the spread and concentration of feature importance.
  2. Partial Dependence Curve-based Metric: Assesses the simplicity of the model's response as a function of individual features, using the second derivative of the partial dependence curve to measure non-linearity.
  3. Surrogacy Model-based Metric: Evaluates the efficacy of simple, interpretable models (surrogates) in approximating the predictions of more complex models.
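The three metric families above can be sketched in a few lines of NumPy. The function names and exact formulas below are illustrative assumptions for this post, not the precise definitions from the paper: normalized entropy as a spread measure for feature importance, the mean absolute second derivative as a non-linearity score for a partial dependence curve, and the R² of a linear least-squares surrogate as a fidelity score.

```python
import numpy as np

def importance_spread_entropy(importances):
    """Normalized Shannon entropy of a feature-importance vector.
    1.0 -> importance spread evenly across features (harder to explain);
    0.0 -> importance concentrated on a single feature (easier to explain).
    """
    p = np.abs(importances) / np.abs(importances).sum()
    p = p[p > 0]                      # drop zero-mass features (0*log 0 := 0)
    entropy = -(p * np.log(p)).sum()
    return entropy / np.log(len(importances))

def pd_nonlinearity(grid, pd_values):
    """Mean absolute second derivative of a partial dependence curve.
    Near zero for a linear response; larger for curvier, less
    interpretable responses.
    """
    second = np.gradient(np.gradient(pd_values, grid), grid)
    return np.abs(second).mean()

def surrogate_fidelity(X, complex_preds):
    """R^2 of a linear least-squares surrogate fitted to the complex
    model's predictions (not the true labels). Close to 1 means a
    simple model mimics the complex one well.
    """
    A = np.column_stack([X, np.ones(len(X))])   # add intercept column
    coef, *_ = np.linalg.lstsq(A, complex_preds, rcond=None)
    resid = complex_preds - A @ coef
    ss_res = (resid ** 2).sum()
    ss_tot = ((complex_preds - complex_preds.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

In practice the importance vector would come from a method such as permutation importance, and the surrogate would typically be a shallow decision tree rather than a linear fit; the linear surrogate here simply keeps the sketch self-contained.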