XAI-TRIS: Non-linear benchmarks to quantify ML explanation performance. (arXiv:2306.12816v1 [cs.LG])

By: Benedict Clark, Rick Wilming, Stefan Haufe. Posted: June 23, 2023

The field of ‘explainable’ artificial intelligence (XAI) has produced highly
cited methods that seek to make the decisions of complex machine learning (ML)
methods ‘understandable’ to humans, for example by attributing ‘importance’
scores to input features. Yet, a lack of formal underpinning leaves it unclear
what conclusions can safely be drawn from the results of a given XAI method,
and has so far hindered both the theoretical verification and the empirical
validation of XAI methods. As a consequence, challenging non-linear problems,
typically solved by deep neural networks, presently lack appropriately
validated explanation methods.
Here, we craft benchmark datasets for three different non-linear classification
scenarios, in which the important class-conditional features are known by
design, serving as ground truth explanations. Using novel quantitative metrics,
we benchmark the explanation performance of a wide set of XAI methods across
three deep learning model architectures. We show that popular XAI methods are
often unable to significantly outperform random baselines and edge detection
methods. Moreover, we demonstrate that explanations derived from different
model architectures can differ vastly, and are thus prone to misinterpretation
even under controlled conditions.
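
To make the evaluation setup concrete, here is a minimal sketch in Python of how explanation performance can be scored when the ground-truth feature mask is known by design. The synthetic image, the top-k precision metric, and the uniform-random and Sobel edge-detection baselines are illustrative assumptions standing in for the paper's benchmark scenarios and metrics, not the authors' exact definitions.

```python
# Sketch: scoring attribution maps against a ground-truth mask that is
# known by construction. Metric and baselines are illustrative choices,
# not the paper's exact definitions.
import numpy as np
from scipy import ndimage

def top_k_precision(attribution, truth_mask, k=None):
    """Fraction of the k highest-attributed pixels that fall inside the
    ground-truth mask. By default k equals the mask size."""
    truth = truth_mask.ravel().astype(bool)
    if k is None:
        k = int(truth.sum())
    top_k = np.argsort(attribution.ravel())[::-1][:k]
    return truth[top_k].mean()

rng = np.random.default_rng(0)

# Hypothetical 64x64 sample: a bright rectangular patch on Gaussian
# noise, with the patch location serving as the ground-truth explanation.
image = rng.normal(size=(64, 64))
truth_mask = np.zeros((64, 64), dtype=bool)
truth_mask[10:14, 20:28] = True
image[truth_mask] += 3.0

# Random baseline: attributions drawn independently of the data.
random_attr = rng.random(image.shape)

# Edge-detection baseline: Sobel gradient magnitude of the input, which
# highlights salient structure without consulting any trained model.
edge_attr = np.hypot(ndimage.sobel(image, axis=0),
                     ndimage.sobel(image, axis=1))

print("random precision:", top_k_precision(random_attr, truth_mask))
print("edge precision:  ", top_k_precision(edge_attr, truth_mask))
```

An XAI method's attribution map would be scored the same way on such data; the finding reported above is that popular methods often fail to clearly beat baselines of this kind.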
