Fairness and accountability of AI in disaster risk management: Opportunities and challenges

C.M. Gevaert*, Mary Carman, Benjamin Rosman, Y. Georgiadou, Robert Soden

*Corresponding author for this work

Research output: Contribution to journal › Review article › Academic › peer-review



Disaster risk management (DRM) seeks to help societies prepare for, mitigate, or recover from the adverse impacts of disasters and climate change. Core to DRM are disaster risk models, which rely heavily on geospatial data about the natural and built environments. Developers are increasingly turning to artificial intelligence (AI) to improve the quality of these models. Yet there is still little understanding of how hidden geospatial biases affect disaster risk models, or of how accountability relationships are reshaped by these emerging actors and methods. In many cases, there is also a disconnect between the algorithm designers and the communities where the research is conducted or the algorithms are deployed. This perspective highlights emerging concerns about the use of AI in DRM. We discuss potential concerns and illustrate what must be considered from a data science, ethical, and social perspective to ensure the responsible use of AI in this field.

Original language: English
Article number: 100363
Pages (from-to): 1-9
Number of pages: 9
Issue number: 11
Publication status: Published - 12 Nov 2021


  • Accountability
  • Artificial Intelligence (AI)
  • Disaster risk management
  • DRM
  • DSML 1: Concept: Basic principles of a new data science output observed and reported
  • Geospatial
  • Values
  • UT-Gold-D


