Weakly supervised semantic segmentation of airborne laser scanning point clouds

Yaping Lin, George Vosselman, Michael Y. Yang*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

23 Citations (Scopus)
175 Downloads (Pure)

Abstract

While modern deep learning algorithms for semantic segmentation of airborne laser scanning (ALS) point clouds have achieved considerable success, the training process often requires a large number of labelled 3D points. Pointwise annotation of 3D point clouds, especially for large-scale ALS datasets, is extremely time-consuming. Weak supervision, which requires far less annotation effort while still enabling networks to achieve comparable performance, is an alternative solution. Assigning a weak label to a subcloud, i.e. a group of points, is an efficient annotation strategy. With the supervision of subcloud labels, we first train a classification network that produces pseudo labels for the training data. The pseudo labels are then taken as input to a segmentation network, which gives the final predictions on the test data. As the quality of the pseudo labels determines the performance of the segmentation network on the test data, we propose an overlap region loss and an elevation attention unit for the classification network to obtain more accurate pseudo labels. The overlap region loss, which considers the semantic information of nearby subclouds, is introduced to enhance awareness of the semantic heterogeneity within a subcloud. The elevation attention unit helps the classification network encode more representative features for ALS point clouds. For the segmentation network, in order to effectively learn representative features from inaccurate pseudo labels, we adopt a supervised contrastive loss that uncovers the underlying correlations of class-specific features. Extensive experiments on three ALS datasets demonstrate the superior performance of our model over the baseline method (Wei et al., 2020). With the same amount of labelling effort, on the ISPRS benchmark dataset, the Rotterdam dataset and the DFC2019 dataset, our method raises the overall accuracy by 0.062, 0.112 and 0.031, and the average F1 score by 0.090, 0.178 and 0.043, respectively.
Our code is publicly available at ‘https://github.com/yaping222/Weak_ALS.git’.
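The supervised contrastive loss adopted for the segmentation stage pulls features of points sharing the same (pseudo) label together and pushes features of other classes apart. Below is a minimal NumPy sketch of the standard supervised contrastive formulation (Khosla et al., 2020); the feature dimension, temperature value and point sampling here are illustrative assumptions, not the paper's exact training setup.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over per-point features.

    features: (N, D) array of L2-normalised feature vectors.
    labels:   (N,) integer class labels (e.g. pseudo labels).
    Returns the mean negative log-probability of positive pairs.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature          # pairwise cosine similarities
    logits_mask = 1.0 - np.eye(n)                      # exclude self-comparisons
    # positives: other points with the same label
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    # numerically stable log-softmax over all other points
    sim_max = (sim * logits_mask).max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # average log-probability of positives, for anchors that have positives
    pos_count = pos_mask.sum(axis=1)
    valid = pos_count > 0
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_count[valid]
    return float(-mean_log_prob_pos.mean())

# Illustrative usage on random, L2-normalised point features
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
pseudo = np.array([0, 0, 1, 1, 2, 2, 0, 1])
loss = supervised_contrastive_loss(feats, pseudo)
```

In this formulation, noisy pseudo labels only affect which pairs count as positives, while the class-level feature structure is still learned from many pairwise comparisons, which is why the loss is robust to label inaccuracy.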

Original language: English
Pages (from-to): 79-100
Number of pages: 22
Journal: ISPRS journal of photogrammetry and remote sensing
Volume: 187
Early online date: 11 Mar 2022
DOIs
Publication status: Published - May 2022

Keywords

  • Airborne laser scanning
  • Point clouds
  • Semantic segmentation
  • Subcloud labels
  • Weak supervision
