Deprivation pockets through the lens of convolutional neural networks

Jiong Wang*, M. Kuffer, Debraj Roy, K. Pfeffer

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

11 Citations (Scopus)
10 Downloads (Pure)


Machine learning techniques have frequently been applied to map urban deprivation (commonly referred to as slums) in very high-resolution satellite images. Among these, deep convolutional neural networks have shown exceptional efficiency in automated deprivation mapping at the local scale. Yet these networks have never been used to map very small, heterogeneous deprivation areas (pockets) at large scale. This study proposes and evaluates a U-Net-Compound model to map deprivation pockets in Bangalore, India. The model relies only on RGB satellite images with a resolution of 2 m, as these are more commonly accessible to local urban planning departments. The experiment assumes a practical situation in which only limited reference data is available for the model to learn the spatial morphology of deprivation pockets, and tests whether an updated map of deprivation pockets can be obtained with such limited information. The model's performance in mapping a large number of deprivation pockets is examined by incrementally changing the model architecture and the amount of training data. Results show that the proposed model is sensitive to the amount of spatial information contained in the training data. Once sufficient spatial information is learnt from a few samples, the city-scale mapping accuracy surpasses that of existing models in mapping small deprivation pockets, achieving a Jaccard Index of 54%. This study demonstrates that a well-designed convolutional neural network can map the existence, extent, and distribution patterns of deprivation pockets at the city scale with limited training data, which is essential for upscaling research outputs to provide important information for the formulation of pro-poor policies.
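The Jaccard Index reported above (54%) is the intersection-over-union between the predicted and reference deprivation masks. A minimal sketch of how this metric is computed for binary segmentation masks (the function name and toy masks below are illustrative, not taken from the paper):

```python
def jaccard_index(pred, truth):
    """Jaccard Index (IoU) between two flat binary masks of equal length:
    |pred AND truth| / |pred OR truth|."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    # Convention: two empty masks are considered a perfect match.
    return intersection / union if union else 1.0

# Toy 4x4 masks (flattened row by row) standing in for a predicted
# and a reference deprivation-pocket map.
pred  = [1, 1, 0, 0,
         1, 1, 0, 0,
         0, 0, 0, 0,
         0, 0, 0, 0]
truth = [1, 1, 0, 0,
         1, 0, 0, 0,
         1, 0, 0, 0,
         0, 0, 0, 0]

print(jaccard_index(pred, truth))  # intersection 3, union 5 -> 0.6
```

A score of 1.0 means the predicted pockets coincide exactly with the reference; the paper's 54% indicates just over half of the combined predicted-plus-reference area is shared.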
Original language: English
Article number: 111448
Pages (from-to): 1-16
Number of pages: 16
Journal: Remote Sensing of Environment
Early online date: 20 Oct 2019
Publication status: Published - 1 Dec 2019


