Explainable AI for earth observation: A review including societal and regulatory perspectives

C.M. Gevaert*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

2 Citations (Scopus)
261 Downloads (Pure)

Abstract

Artificial intelligence and machine learning are ubiquitous in the domain of Earth Observation (EO) and Remote Sensing. Mirroring their success in computer vision, they have proven to achieve high accuracies in EO applications. Yet EO experts should also consider the weaknesses of complex machine-learning models before adopting them for specific applications. One such weakness is the lack of explainability of complex deep learning models. This paper reviews published examples of explainable ML or explainable AI in the field of Earth Observation. Explainability methods are classified as intrinsic versus post-hoc, model-specific versus model-agnostic, and global versus local, and examples of each type are provided. The paper also identifies key explainability requirements articulated in the social sciences and in upcoming regulation, namely the UNESCO Recommendation on the Ethics of Artificial Intelligence and the EU draft Artificial Intelligence Act, and analyzes whether these requirements are sufficiently addressed in the field of EO.

The findings indicate that there is a lack of clarity regarding which models can be considered interpretable. First, EO applications often use Random Forests as an “interpretable” benchmark algorithm against which complex deep-learning models are compared, even though the social sciences clearly argue that large Random Forests cannot be considered interpretable. Second, most explanations target domain experts rather than potential users of the algorithm, regulatory bodies, or those who might be affected by an algorithm’s decisions. Finally, publications tend to provide explanations without testing their usefulness with the intended audience. In light of these societal and regulatory considerations, a framework is provided to guide the selection of an appropriate machine learning algorithm based on the availability of simpler algorithms with high predictive accuracy as well as the purpose and intended audience of the explanation.
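The taxonomy above (intrinsic vs. post-hoc, model-specific vs. model-agnostic, global vs. local) can be made concrete with a minimal sketch, assuming scikit-learn and a synthetic stand-in for an EO feature table such as spectral bands; this example is illustrative and not taken from the paper itself.

```python
# Hedged sketch contrasting two global explanation types for a
# Random Forest, on synthetic data standing in for EO features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic "EO" feature table: 6 features, e.g. spectral bands.
X, y = make_classification(n_samples=300, n_features=6,
                           n_informative=3, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-specific, intrinsic global explanation: impurity-based
# feature importances read directly from the fitted ensemble.
intrinsic = rf.feature_importances_

# Model-agnostic, post-hoc global explanation: permutation
# importance, which only needs predictions from any fitted model.
post_hoc = permutation_importance(rf, X, y, n_repeats=5, random_state=0)

print(intrinsic.round(2))
print(post_hoc.importances_mean.round(2))
```

Note that both of these are global explanations; local methods (e.g. explaining one pixel's classification) would instead attribute a single prediction to its input features.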
Original language: English
Article number: 102869
Number of pages: 11
Journal: International Journal of Applied Earth Observation and Geoinformation (JAG)
Volume: 112
Early online date: 20 Jun 2022
DOIs
Publication status: Published - Aug 2022

Keywords

  • ITC-ISI-JOURNAL-ARTICLE
  • ITC-GOLD
  • UT-Gold-D

