Investigating Poor Performance Regions of Black Boxes: LIME-based Exploration in Sepsis Detection. (arXiv:2306.12507v1 [cs.LG])


By: Mozhgan Salimiparsa, Surajsinh Parmar, San Lee, Choongmin Kim, Yonghwan Kim, Jang Yong Kim
Posted: June 23, 2023

Interpreting machine learning models remains a challenge, hindering their
adoption in clinical settings. This paper proposes leveraging Local
Interpretable Model-Agnostic Explanations (LIME) to provide interpretable
descriptions of black-box classification models in high-stakes sepsis
detection. Analyzing misclassified instances identifies the features that
contribute most to suboptimal performance. The analysis reveals
regions where the classifier performs poorly, allowing the calculation of error
rates within these regions. This knowledge is crucial for cautious
decision-making in sepsis detection and other critical applications. The
proposed approach is demonstrated using the eICU dataset, effectively
identifying and visualizing regions where the classifier underperforms. By
enhancing interpretability, our method promotes the adoption of machine
learning models in clinical practice, empowering informed decision-making and
mitigating risks in critical scenarios.
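
The workflow the abstract describes (train a black-box classifier, collect its misclassified instances, explain each one with LIME, and aggregate the explanations to delineate poor-performance regions) can be sketched with the open-source lime package and scikit-learn. This is a minimal illustration, not the authors' eICU pipeline: the synthetic data, the random-forest model, and the feature names are placeholder assumptions.

import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data standing in for eICU features (hypothetical).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(200, 6))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
feature_names = [f"vital_{i}" for i in range(6)]  # hypothetical names

# Any black-box classifier exposing predict_proba works here.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)
misclassified = np.where(y_pred != y_test)[0]

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["no sepsis", "sepsis"],
    discretize_continuous=True,  # yields interval conditions, e.g. "vital_0 > 0.52"
)

# Explain each misclassified instance and tally the discretized
# feature conditions that LIME weights most heavily.
top_conditions = Counter()
for i in misclassified:
    exp = explainer.explain_instance(X_test[i], clf.predict_proba, num_features=3)
    for condition, weight in exp.as_list():
        top_conditions[condition] += 1

print(top_conditions.most_common(5))

The most frequent conditions outline candidate regions of poor performance; filtering the test set on such a condition and comparing predictions with labels then gives the per-region error rate that the abstract refers to.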

DoctorMorDi
Moderator and Editor