TY - JOUR
T1 - Explainable artificial intelligence in geoscience: A glimpse into the future of landslide susceptibility modeling
AU - Dahal, A.
AU - Lombardo, L.
PY - 2023/7
AB - For decades, the distinction between statistical models and machine learning models has been clear: the former are optimized to produce interpretable results, whereas the latter seek to maximize the predictive performance of the task at hand. This holds for any scientific field and for any method belonging to the two categories mentioned above. When attempting to predict natural hazards, this difference has led researchers to a drastic and difficult choice about which aspect to prioritize. One would always seek the highest performance, because higher performance corresponds to better decisions for disaster risk reduction. However, scientists also wish to understand the results, so that they can rely on the tools they develop. Today, recent developments in deep learning have brought forward a new generation of interpretable artificial intelligence, where the predictive power typical of machine learning tools is equipped with a level of explanatory power typical of statistical approaches. In this work, we attempt to demonstrate the capabilities of this new generation of explainable artificial intelligence (XAI). To do so, we take the landslide susceptibility context as a reference. Specifically, we build an XAI model trained on landslides that occurred in response to the Gorkha earthquake (April 25, 2015), providing an educational overview of the model design and its querying opportunities. The results show high performance, with an AUC score of 0.89, while the interpretability can be extended to the probabilistic result assigned to each mapping unit.
KW - UT-Hybrid-D
KW - ITC-ISI-JOURNAL-ARTICLE
KW - ITC-HYBRID
DO - 10.1016/j.cageo.2023.105364
M3 - Article
SN - 0098-3004
VL - 176
JO - Computers & Geosciences
JF - Computers & Geosciences
M1 - 105364
ER -