Abstract
Road surface extraction from remote sensing images using deep learning has achieved good performance; however, most existing methods rely on fully supervised learning, which requires large amounts of training data with laborious per-pixel annotation. In this article, we propose a scribble-based weakly supervised road surface extraction method named ScRoadExtractor, which learns from easily accessible scribbles, such as centerlines, instead of densely annotated road surface ground truths. To propagate semantic information from sparse scribbles to unlabeled pixels, we introduce a road label propagation algorithm that considers both the buffer-based properties of road networks and the color and spatial information of superpixels to produce a proposal mask with three categories: road, nonroad, and unknown. The proposal mask, along with auxiliary boundary prior information detected from the images, is used to train a dual-branch encoder–decoder network designed for precise road surface segmentation. We perform experiments on three diverse road data sets comprising high-resolution satellite and aerial remote sensing images from across the world. The results demonstrate that ScRoadExtractor exceeds the classic scribble-supervised segmentation method by 20% in intersection over union (IoU) and outperforms state-of-the-art scribble-based weakly supervised methods by at least 4%.
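The buffer-based half of the label propagation step can be illustrated with a minimal sketch: pixels close to a centerline scribble are marked road, pixels far from any scribble are marked nonroad, and the ring in between is left unknown so it is ignored during training. The label codes, buffer radii, and function name below are illustrative assumptions, not from the paper, and the superpixel color/spatial fusion the method also uses is omitted.

```python
import numpy as np

ROAD, NONROAD, UNKNOWN = 1, 0, 255  # label codes chosen here for illustration

def buffer_proposal_mask(scribble, inner_r=3.0, outer_r=12.0):
    """Buffer-based label propagation sketch: pixels within inner_r of any
    scribble pixel become road, pixels farther than outer_r become nonroad,
    and the ring in between stays unknown (ignored by the training loss)."""
    ys, xs = np.nonzero(scribble)            # coordinates of scribble pixels
    grid_y, grid_x = np.indices(scribble.shape)
    # distance of every pixel to its nearest scribble pixel (brute force,
    # fine for a toy grid; a distance transform would be used in practice)
    d2 = (grid_y[..., None] - ys) ** 2 + (grid_x[..., None] - xs) ** 2
    dist = np.sqrt(d2.min(axis=-1))
    mask = np.full(scribble.shape, UNKNOWN, dtype=np.uint8)
    mask[dist <= inner_r] = ROAD
    mask[dist > outer_r] = NONROAD
    return mask

# toy example: a horizontal centerline scribble on a 32x32 tile
scribble = np.zeros((32, 32), dtype=np.uint8)
scribble[16, 4:28] = 1
mask = buffer_proposal_mask(scribble)
```

In practice the inner radius would reflect a typical road half-width for the image resolution, and the uncertain ring lets the network resolve the exact road boundary from the image and boundary prior rather than from the (noisy) buffer.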
Original language | English |
---|---|
Article number | 9372390 |
Pages (from-to) | 1-12 |
Number of pages | 12 |
Journal | IEEE Transactions on Geoscience and Remote Sensing |
Volume | 60 |
DOIs | |
Publication status | Published - 1 Jan 2022 |
Externally published | Yes |
Keywords
- Roads
- Proposals
- Annotations
- Training
- Remote sensing
- Image segmentation
- Semantics
- ITC-CV