Abstract
Machine Learning (ML) can improve the diagnosis, treatment decisions, and understanding of cancer. However, the low explainability of how “black box” ML methods produce their output hinders their clinical adoption. In this paper, we used data from the Netherlands Cancer Registry to build an ML-based model that predicts the 10-year overall survival of breast cancer patients. We then used Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to interpret the model's predictions. We found that LIME and SHAP tend to be consistent overall when explaining the contribution of different features. Nevertheless, the feature ranges where they disagree can also be of interest, since they can help us identify “turning points” where a feature goes from favoring the “survived” class to favoring the “deceased” class (or vice versa). Explainability techniques can pave the way for better acceptance of ML techniques, but their evaluation and translation to real-life scenarios need to be researched further.
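To make the approach in the abstract concrete, below is a minimal sketch of such a pipeline: train a classifier on tabular data, then explain a prediction with both LIME and SHAP. This is not the authors' code; the random forest model, the synthetic data, and the feature names are placeholder assumptions (the Netherlands Cancer Registry data is not public), and only the use of LIME and SHAP comes from the paper.

```python
# Minimal sketch (not the authors' code): train a tabular classifier,
# then explain one prediction with LIME and SHAP. Synthetic data and
# generic feature names stand in for the registry data.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap

# Placeholder stand-in for registry features (e.g., age, tumor stage).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME: fits an interpretable local surrogate around one instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["deceased", "survived"],
    discretize_continuous=True,
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # [(feature condition, local weight), ...]

# SHAP: Shapley-value attributions, computed exactly for tree ensembles.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])
print(shap_values)  # attributions for the same instance
                    # (output format varies across shap versions)
```

Because LIME weights come from a local surrogate model while SHAP values are Shapley attributions, the two usually agree in sign but can diverge in specific feature ranges, which is exactly where the abstract locates the “turning points.”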
Original language | English |
---|---|
Title of host publication | Digital Personalized Health and Medicine - Proceedings of MIE 2020 |
Editors | Louise B. Pape-Haugaard, Christian Lovis, Inge Cort Madsen, Patrick Weber, Per Hostrup Nielsen, Philip Scott |
Publisher | IOS Press |
Pages | 307-311 |
Number of pages | 5 |
ISBN (Electronic) | 9781643680828 |
DOIs | |
Publication status | Published - 16 Jun 2020 |
Event | 30th Medical Informatics Europe Conference, MIE 2020 (canceled), Geneva, Switzerland. Duration: 28 Apr 2020 → 1 May 2020. Conference number: 30 |
Publication series
Name | Studies in Health Technology and Informatics |
---|---|
Volume | 270 |
ISSN (Print) | 0926-9630 |
ISSN (Electronic) | 1879-8365 |
Conference
Conference | 30th Medical Informatics Europe Conference, MIE 2020 |
---|---|
Abbreviated title | MIE 2020 |
Country/Territory | Switzerland |
City | Geneva |
Period | 28/04/20 → 01/05/20 |
Keywords
- Artificial Intelligence
- Interpretability
- Oncology
- Prediction model