Abstract

Geospatial data play a central role in disaster risk reduction and management (DRRM). In that context they are used to decide where, when, and to whom aid should be given. As in other domains, the datafication and digitalization of humanitarian aid raise privacy issues. In this paper, we explore group data-related harms from demographically identifiable information (DII) in geospatial data. Group privacy is an increasingly important issue in digital humanitarian work because algorithms are concerned with classifying individuals into groups. Consequently, analysis needs to transcend harms caused by individual re-identification and a lack of consent. We explore four harms posed by commonly used geospatial data: (i) biases arising from missing or underrepresented categories; (ii) the mosaic effect, i.e., the unintentional discovery of sensitive knowledge by combining disparate datasets, and the role of AI in facilitating and accelerating data mosaicking, which obscures minute data problems (e.g., biases) that accumulate in the final result; (iii) misuse of data, whether it is shared or not; and (iv) the cost-benefit trade-off between the cost of protection and the risk of misuse. Using threat-modelling methods, the paper contributes to the literature on group privacy harms in the humanitarian domain by suggesting an appropriate (geo-)data triage and showing how this can also be relevant for the wider use of geodata modelling.
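The mosaic effect named in (ii) can be illustrated with a minimal, hypothetical sketch (not taken from the paper). All dataset names, column names, and values below are invented for illustration; the point is only that joining two individually "safe" aggregate datasets can surface demographically identifiable information:

```python
import pandas as pd

# Hypothetical dataset A: anonymised aid-distribution records,
# aggregated to grid cells (no names, no individual identifiers).
aid_points = pd.DataFrame({
    "grid_cell": ["C12", "C12", "C13", "C14"],
    "households_served": [40, 35, 12, 8],
})

# Hypothetical dataset B: an open demographic survey aggregated
# to the same grid cells.
demographics = pd.DataFrame({
    "grid_cell": ["C12", "C13", "C14"],
    "dominant_group": ["group_X", "mixed", "group_X"],
})

# Joining the two datasets links aid flows to a demographic group:
# DII emerges even though neither dataset identifies individuals.
mosaic = aid_points.merge(demographics, on="grid_cell")
exposure = (
    mosaic.groupby("dominant_group")["households_served"]
    .sum()
    .sort_values(ascending=False)
)
print(exposure)  # reveals which group receives the most aid, per area
```

A (geo-)data triage of the kind the abstract proposes would ask, before release, whether such joins against plausibly available third-party datasets create group-level exposure.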
Original language: English
Publication status: Published - 27 Jun 2023
Event: Digital Geography Research Group Annual Conference 2023: CAMRI Ethics of the Digital - University of Westminster, London, United Kingdom
Duration: 27 Jun 2023 – 28 Jun 2023

Conference

Conference: Digital Geography Research Group Annual Conference 2023
Country/Territory: United Kingdom
City: London
Period: 27/06/23 – 28/06/23
