Feature Importance to Explain Multimodal Prediction Models: a Clinical Use Case

Jorn-Jan van de Beld*, Shreyasi Pathak, Jeroen Geerdink, Johannes H. Hegeman, Christin Seifert

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

Surgery to treat elderly hip fracture patients may cause complications that can lead to early mortality. An early warning system for complications could prompt clinicians to monitor high-risk patients more closely and address potential complications early, or to inform the patient. In this work, we develop a multimodal deep-learning model for post-operative mortality prediction using pre-operative and per-operative data from elderly hip fracture patients. Specifically, the pre-operative data comprise static patient data and hip and chest images taken before surgery; the per-operative data comprise vital signals and medications administered during surgery. We extract features from the image modalities using ResNet and from the vital signals using an LSTM. Explainable model outcomes are essential for clinical applicability; therefore, we compute Shapley values to explain the predictions of our multimodal black-box model. We find that i) Shapley values can be used to estimate the relative contribution of each modality both locally and globally, and ii) a modified version of the chain rule can be used to propagate Shapley values through a sequence of models, supporting interpretable local explanations. Our findings imply that a multimodal combination of black-box models can be explained by propagating Shapley values through the model sequence.
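The modality-level attribution described in the abstract rests on the Shapley value from cooperative game theory. The sketch below is not the authors' implementation and the payoff numbers are invented for illustration; it computes exact Shapley values for a toy two-modality "game" (the payoff of a coalition standing in for the predictive value of that subset of modalities) and relies on the efficiency property, i.e. the attributions sum to the difference between the full-coalition and empty-coalition payoffs:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, players):
    """Exact Shapley values for a cooperative game.

    value_fn maps a frozenset of players to a real-valued payoff.
    Exact enumeration is exponential in len(players), which is fine
    for a handful of modalities.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p to coalition s
                total += weight * (value_fn(s | {p}) - value_fn(s))
        phi[p] = total
    return phi

# Hypothetical payoffs: predictive value of each subset of modalities.
payoffs = {
    frozenset(): 0.0,
    frozenset({"images"}): 0.3,
    frozenset({"vitals"}): 0.2,
    frozenset({"images", "vitals"}): 0.6,
}

phi = shapley_values(lambda s: payoffs[s], ["images", "vitals"])
```

By efficiency, `phi["images"] + phi["vitals"]` equals `0.6 - 0.0`, so the per-modality attributions fully decompose the toy model's output, which is what makes Shapley values attractive for estimating relative modality contributions.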
Original language: English
Title of host publication: Explainable Artificial Intelligence
Subtitle of host publication: Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part IV
Editors: Luca Longo, Sebastian Lapuschkin, Christin Seifert
Place of publication: Cham, Switzerland
Publisher: Springer Nature
Pages: 84-101
Number of pages: 18
ISBN (Electronic): 978-3-031-63803-9
ISBN (Print): 978-3-031-63802-2
DOIs
Publication status: Published - 10 Jul 2024
Event: The 2nd World Conference on eXplainable Artificial Intelligence - Valletta, Malta
Duration: 17 Jul 2024 - 19 Jul 2024
https://xaiworldconference.com/2024/

Publication series

Name: Communications in Computer and Information Science
Publisher: Springer
Volume: 2156
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: The 2nd World Conference on eXplainable Artificial Intelligence
Country/Territory: Malta
City: Valletta
Period: 17/07/24 - 19/07/24
Internet address: https://xaiworldconference.com/2024/

Keywords

  • 2024 OA procedure
