Prototype-Based Interpretable Breast Cancer Prediction Models: Analysis and Challenges

Shreyasi Pathak*, Jörg Schlötterer, Jeroen Veltman, Jeroen Geerdink, Maurice van Keulen, Christin Seifert

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

1 Citation (Scopus)

Abstract

Deep learning models have achieved high performance in medical applications; however, their adoption in clinical practice is hindered by their black-box nature. Using explainable AI (XAI) in high-stakes medical decisions could increase their usability in clinical settings. Self-explainable models, like prototype-based models, can be especially beneficial as they are interpretable by design. However, if the learnt prototypes are of low quality, prototype-based models are no better than black-box models; high-quality prototypes are a prerequisite for a truly interpretable model. In this work, we propose a Prototype Evaluation Framework for Coherence (PEF-Coh) for quantitatively evaluating the quality of prototypes based on domain knowledge, and we demonstrate its use in the context of breast cancer prediction from mammography. Existing work on prototype-based models for breast cancer prediction from mammography has focused on improving the classification performance of prototype-based models relative to black-box models and has evaluated prototype quality only through anecdotal evidence. We are the first to go beyond anecdotal evidence and evaluate the quality of mammography prototypes systematically, using our PEF-Coh. Specifically, we apply three state-of-the-art prototype-based models, ProtoPNet, BRAIxProtoPNet++ and PIP-Net, to mammography images for breast cancer prediction and evaluate these models w.r.t. i) classification performance and ii) prototype quality on three public datasets. Our results show that prototype-based models are competitive with black-box models in terms of classification performance and achieve higher scores in detecting ROIs. However, the quality of the prototypes is not yet sufficient and can be improved with respect to relevance, purity and the variety of prototypes learnt.
We call on the XAI community to systematically evaluate prototype quality, in order to assess the true usability of such models in high-stakes decisions and to improve them further.

Original language: English
Title of host publication: Explainable Artificial Intelligence - 2nd World Conference, xAI 2024, Proceedings
Editors: Luca Longo, Sebastian Lapuschkin, Christin Seifert
Publisher: Springer
Pages: 21-42
Number of pages: 22
ISBN (Print): 9783031637865
DOIs
Publication status: Published - 10 Jul 2024
Event: 2nd XAI World Conference 2024 - Valletta, Malta
Duration: 16 Jul 2024 - 19 Jul 2024
Conference number: 2
https://xaiworldconference.com/2024/doctoral-consortium/

Publication series

Name: Communications in Computer and Information Science
Volume: 2153 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 2nd XAI World Conference 2024
Country/Territory: Malta
City: Valletta
Period: 16/07/24 - 19/07/24

Keywords

  • 2024 OA procedure
  • Explainable AI
  • Mammography
  • Prototype-based models
  • Breast cancer prediction
