Abstract
Deep learning models have achieved high performance in medical applications; however, their adoption in clinical practice is hindered by their black-box nature. Using explainable AI (XAI) for high-stakes medical decisions could increase their usability in clinical settings. Self-explainable models, such as prototype-based models, can be especially beneficial as they are interpretable by design. However, if the learnt prototypes are of low quality, prototype-based models are no better than black-box ones. High-quality prototypes are a prerequisite for a truly interpretable model. In this work, we propose a Prototype Evaluation Framework for Coherence (PEF-Coh) for quantitatively evaluating the quality of prototypes based on domain knowledge. We demonstrate the use of PEF-Coh in the context of breast cancer prediction using mammography. Existing work on prototype-based models for breast cancer prediction from mammography has focused on improving the classification performance of prototype-based models compared to black-box models and has evaluated prototype quality only through anecdotal evidence. We are the first to go beyond anecdotal evidence and evaluate the quality of mammography prototypes systematically using our PEF-Coh. Specifically, we apply three state-of-the-art prototype-based models, ProtoPNet, BRAIxProtoPNet++ and PIP-Net, to mammography images for breast cancer prediction and evaluate these models with respect to i) classification performance and ii) quality of the prototypes, on three public datasets. Our results show that prototype-based models are competitive with black-box models in terms of classification performance and achieve higher scores in detecting regions of interest (ROIs). However, the quality of the prototypes is not yet sufficient and can be improved in terms of relevance, purity and the variety of prototypes learnt.
We call on the XAI community to systematically evaluate the quality of prototypes, to verify the true usability of such models in high-stakes decisions and to improve them further.
Original language | English |
---|---|
Title of host publication | Explainable Artificial Intelligence - 2nd World Conference, xAI 2024, Proceedings |
Editors | Luca Longo, Sebastian Lapuschkin, Christin Seifert |
Publisher | Springer |
Pages | 21-42 |
Number of pages | 22 |
ISBN (Print) | 9783031637865 |
DOIs | |
Publication status | Published - 10 Jul 2024 |
Event | 2nd XAI World Conference 2024, Valletta, Malta; Duration: 16 Jul 2024 → 19 Jul 2024; Conference number: 2; https://xaiworldconference.com/2024/doctoral-consortium/ |
Publication series
Name | Communications in Computer and Information Science |
---|---|
Volume | 2153 CCIS |
ISSN (Print) | 1865-0929 |
ISSN (Electronic) | 1865-0937 |
Conference
Conference | 2nd XAI World Conference 2024 |
---|---|
Country/Territory | Malta |
City | Valletta |
Period | 16/07/24 → 19/07/24 |
Keywords
- Explainable AI
- Mammography
- Prototype-based models
- Breast cancer prediction