A deep learning approach to DTM extraction from imagery using rule-based training labels

C.M. Gevaert (Corresponding Author), C. Persello, F. Nex, G. Vosselman

Research output: Contribution to journal › Article › Academic › peer-review

15 Citations (Scopus)
3 Downloads (Pure)

Abstract


Existing algorithms for Digital Terrain Model (DTM) extraction still face difficulties due to data outliers and geometric ambiguities in the scene such as contiguous off-ground areas or sloped environments. We postulate that in such challenging cases, the radiometric information contained in aerial imagery may be leveraged to distinguish between ground and off-ground objects. We propose a method for DTM extraction from imagery which first applies morphological filters to the Digital Surface Model to obtain candidate ground and off-ground training samples. These samples are used to train a Fully Convolutional Network (FCN) in the second step, which can then be used to identify ground samples for the entire dataset. The proposed method harnesses the power of state-of-the-art deep learning methods, while showing how they can be adapted to the application of DTM extraction by (i) automatically selecting and labelling dataset-specific samples which can be used to train the network, and (ii) adapting the network architecture to consider a larger surface area without unnecessarily increasing the computational burden. The method is successfully tested on four datasets, indicating that the automatic labelling strategy can achieve an accuracy which is comparable to the use of manually labelled training samples. Furthermore, we demonstrate that the proposed method outperforms two reference DTM extraction algorithms in challenging areas.
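The first step described above, using morphological filtering of the DSM to obtain candidate ground and off-ground training labels, can be sketched roughly as follows. This is an illustrative approximation, not the paper's implementation: the window size, residual thresholds, and the function name `label_candidates` are all assumptions chosen for the example.

```python
import numpy as np
from scipy import ndimage

def label_candidates(dsm, window=15, height_thresh=1.0):
    """Label candidate ground/off-ground pixels from a DSM (illustrative sketch).

    A grey-scale morphological opening estimates the local terrain surface;
    pixels far above that estimate are labelled off-ground, pixels close to
    it are labelled ground, and ambiguous pixels are left unlabelled.
    """
    # Opening (erosion then dilation) with a flat structuring element
    # suppresses objects narrower than the window, approximating terrain.
    terrain = ndimage.grey_opening(dsm, size=(window, window))
    residual = dsm - terrain  # height above the estimated terrain

    labels = np.full(dsm.shape, -1, dtype=np.int8)  # -1 = unlabelled
    labels[residual < 0.2] = 0                      # candidate ground
    labels[residual > height_thresh] = 1            # candidate off-ground
    return labels

# Toy DSM: flat terrain with a 3 m-high "building" block.
dsm = np.zeros((32, 32))
dsm[10:18, 10:18] = 3.0
labels = label_candidates(dsm)  # building pixels -> 1, surroundings -> 0
```

In the paper these automatically generated labels then serve as training samples for the FCN, which generalises beyond the cases the morphological rules can resolve.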
Original language: English
Pages (from-to): 106-123
Number of pages: 18
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 142
Early online date: 15 Jun 2018
DOIs
Publication status: Published - 1 Aug 2018

Keywords

  • ITC-ISI-JOURNAL-ARTICLE

