Evaluating Explainability for Machine Learning Predictions using Model-Agnostic Metrics

In our recent peer-reviewed paper, Holistic AI researchers and additional contributors highlight the importance of explainability in AI, defining explainability in terms of interpretability. After a review of the existing literature, we introduce a set of model-agnostic computational metrics to support explainability in AI and apply them in a series of experiments. The metrics include:

  1. Feature Importance-based Metrics: Evaluate the distribution and stability of feature importance across the model, using concepts like entropy and divergence to measure the spread and concentration of feature importance.
  2. Partial Dependence Curve-based Metric: Assesses the simplicity of the model's response as a function of individual features, using the second derivative of the partial dependence curve to measure non-linearity.
  3. Surrogacy Model-based Metric: Evaluates the efficacy of simple, interpretable models (surrogates) in approximating the predictions of more complex models.
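The entropy idea behind the first family of metrics can be sketched as follows, assuming a feature-importance vector has already been extracted (e.g. via permutation importance). The function name and normalization are illustrative, not the paper's exact formulation:

```python
import numpy as np

def importance_entropy(importances):
    """Normalized Shannon entropy of a feature-importance vector.

    Returns a value in [0, 1]: near 0 when importance is concentrated
    on a single feature (easy to summarize), 1 when importance is
    spread uniformly across all features.
    """
    p = np.abs(importances) / np.sum(np.abs(importances))
    p = p[p > 0]  # drop zero-importance features (0 * log 0 := 0)
    entropy = -np.sum(p * np.log(p))
    return float(entropy / np.log(len(importances)))

# A concentrated vector scores near 0, a uniform one scores 1.
print(importance_entropy([1.0, 0.0, 0.0, 0.0]))   # concentrated: ~0
print(importance_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: 1
```

A low score suggests the prediction can be explained by a handful of features; a high score means any faithful explanation must mention many features.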
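The second metric's non-linearity idea can be sketched with finite differences over an already-computed partial dependence curve. The helper name and the mean-absolute aggregation are assumptions, not necessarily the paper's exact formula:

```python
import numpy as np

def pd_nonlinearity(grid, pd_values):
    """Approximate the mean absolute second derivative of a partial
    dependence curve via finite differences.

    A perfectly linear response scores ~0; larger values indicate a
    more curved, harder-to-summarize dependence on the feature.
    """
    first = np.gradient(pd_values, grid)   # dPD/dx
    second = np.gradient(first, grid)      # d2PD/dx2
    return float(np.mean(np.abs(second)))

grid = np.linspace(0.0, 4.0, 50)
print(pd_nonlinearity(grid, 2.0 * grid + 1.0))  # linear response: ~0
print(pd_nonlinearity(grid, grid ** 2))         # quadratic response: > 0
```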
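For the surrogacy idea, a common pattern (sketched here with scikit-learn; the R² fidelity score and tree depth are illustrative choices, not necessarily the paper's) is to fit a shallow decision tree to the complex model's predictions and score the agreement:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor

def surrogate_fidelity(black_box, X, max_depth=3):
    """Fit a shallow decision tree to the black-box model's predictions
    on X and return the R^2 agreement between the two.

    Values near 1 mean a simple, interpretable surrogate reproduces the
    complex model's behaviour well.
    """
    y_hat = black_box.predict(X)
    surrogate = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
    surrogate.fit(X, y_hat)
    return float(r2_score(y_hat, surrogate.predict(X)))

# Illustrative black box: a random forest trained on a step-like target,
# which a shallow tree should approximate well.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(float)
model = RandomForestRegressor(random_state=0).fit(X, y)
print(surrogate_fidelity(model, X))  # high fidelity expected here
```

When the fidelity is high, the surrogate tree itself can be shown to stakeholders as a faithful, human-readable approximation of the complex model.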
Download our latest academic paper, "Evaluating Explainability for Machine Learning Predictions using Model-Agnostic Metrics":
https://cdn.prod.website-files.com/6305e5d52c28356b4fe71bac/65bbd50e1ee6f0f2aed9856e_Holistic-AI-Evaluating-Explainability-for-Machine-Learning-Predictions.pdf

