Scrutinizing XAI using linear ground-truth data with suppressor variables. (arXiv:2111.07473v2 [stat.ML] UPDATED)
By: <a href="http://arxiv.org/find/stat/1/au:+Wilming_R/0/1/0/all/0/1">Rick Wilming</a>, <a href="http://arxiv.org/find/stat/1/au:+Budding_C/0/1/0/all/0/1">Céline Budding</a>, <a href="http://arxiv.org/find/stat/1/au:+Muller_K/0/1/0/all/0/1">Klaus-Robert Müller</a>, <a href="http://arxiv.org/find/stat/1/au:+Haufe_S/0/1/0/all/0/1">Stefan Haufe</a> Posted: June 23, 2023
Machine learning (ML) is increasingly used to inform high-stakes
decisions. As complex ML models (e.g., deep neural networks) are often
considered black boxes, a wealth of procedures has been developed to shed light
on their inner workings and the ways in which their predictions come about,
defining the field of ‘explainable AI’ (XAI). Saliency methods rank input
features according to some measure of ‘importance’. Such methods are difficult
to validate since a formal definition of feature importance is, thus far,
lacking. It has been demonstrated that some saliency methods can highlight
features that have no statistical association with the prediction target
(suppressor variables). To avoid misinterpretations arising from such behavior, we
propose the actual presence of such an association as a necessary condition,
and thus an objective preliminary definition, for feature importance. We carefully crafted a
ground-truth dataset in which all statistical dependencies are well-defined and
linear, serving as a benchmark to study the problem of suppressor variables. We
evaluate common explanation methods including LRP, DTD, PatternNet,
PatternAttribution, LIME, Anchors, SHAP, and permutation-based methods with
respect to our objective definition. We show that most of these methods are
unable to distinguish important features from suppressors in this setting.
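The suppressor phenomenon the abstract describes can be illustrated with a minimal synthetic construction (a textbook two-feature example, not the paper's exact benchmark): one feature mixes the target with a shared distractor signal, and a second feature contains only the distractor. The second feature is statistically independent of the target, yet the optimal linear model must assign it a nonzero weight to cancel the distractor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical minimal setup: z is the prediction target,
# d is a "distractor" signal shared across features.
z = rng.standard_normal(n)
d = rng.standard_normal(n)

x1 = z + d   # informative feature: target contaminated by the distractor
x2 = d       # suppressor: no statistical association with the target z

X = np.column_stack([x1, x2])

# Ordinary least squares: since z = x1 - x2 exactly, the optimal
# linear model is w = [1, -1] -- it must use the suppressor x2
# to subtract the distractor out of x1.
w, *_ = np.linalg.lstsq(X, z, rcond=None)

print(np.round(w, 2))                 # ≈ [ 1. -1.]
print(np.corrcoef(x2, z)[0, 1])       # near zero: x2 carries no target information
```

A saliency method that attributes importance to x2 here would be highlighting a feature with no statistical association to the target, which is exactly the failure mode the proposed necessary-condition definition is meant to expose.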