TY - JOUR
T1 - Explainable AI in medical imaging
T2 - An overview for clinical practitioners – Beyond saliency-based XAI approaches
AU - Borys, Katarzyna
AU - Schmitt, Yasmin Alyssa
AU - Nauta, Meike
AU - Seifert, Christin
AU - Krämer, Nicole
AU - Friedrich, Christoph M.
AU - Nensa, Felix
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/5/1
Y1 - 2023/5/1
N2 - Driven by recent advances in Artificial Intelligence (AI) and Computer Vision (CV), the implementation of AI systems in the medical domain has increased correspondingly. This is especially true for medical imaging, where AI aids several imaging-based tasks such as classification, segmentation, and registration. Moreover, AI reshapes medical research and contributes to the development of personalized clinical care. Consequently, alongside its expanding implementation arises the need for an extensive understanding of AI systems, their inner workings, potentials, and limitations: a need that the field of eXplainable AI (XAI) addresses. Because medical imaging is mainly associated with visual tasks, most explainability approaches incorporate saliency-based XAI methods. In contrast, this article investigates the full potential of XAI methods in medical imaging by focusing specifically on XAI techniques that do not rely on saliency, and by providing diversified examples. We address a broad audience, but particularly healthcare professionals. This work also aims at establishing a common ground for cross-disciplinary understanding and exchange between Deep Learning (DL) developers and healthcare professionals, which is why we opted for a non-technical overview. The presented XAI methods are divided by their output representation into the following categories: case-based explanations, textual explanations, and auxiliary explanations.
AB - Driven by recent advances in Artificial Intelligence (AI) and Computer Vision (CV), the implementation of AI systems in the medical domain has increased correspondingly. This is especially true for medical imaging, where AI aids several imaging-based tasks such as classification, segmentation, and registration. Moreover, AI reshapes medical research and contributes to the development of personalized clinical care. Consequently, alongside its expanding implementation arises the need for an extensive understanding of AI systems, their inner workings, potentials, and limitations: a need that the field of eXplainable AI (XAI) addresses. Because medical imaging is mainly associated with visual tasks, most explainability approaches incorporate saliency-based XAI methods. In contrast, this article investigates the full potential of XAI methods in medical imaging by focusing specifically on XAI techniques that do not rely on saliency, and by providing diversified examples. We address a broad audience, but particularly healthcare professionals. This work also aims at establishing a common ground for cross-disciplinary understanding and exchange between Deep Learning (DL) developers and healthcare professionals, which is why we opted for a non-technical overview. The presented XAI methods are divided by their output representation into the following categories: case-based explanations, textual explanations, and auxiliary explanations.
KW - Black-Box
KW - Explainability
KW - Explainable AI
KW - Interpretability
KW - Medical imaging
KW - Radiology
KW - 2023 OA procedure
UR - http://www.scopus.com/inward/record.url?scp=85151029301&partnerID=8YFLogxK
U2 - 10.1016/j.ejrad.2023.110786
DO - 10.1016/j.ejrad.2023.110786
M3 - Review article
AN - SCOPUS:85151029301
SN - 0720-048X
VL - 162
JO - European Journal of Radiology
JF - European Journal of Radiology
M1 - 110786
ER -