Ground and Multi-Class Classification of Airborne Laser Scanner Point Clouds Using Fully Convolutional Networks

Aldino Rizaldy (Corresponding Author), C. Persello (Corresponding Author), C.M. Gevaert, S.J. Oude Elberink (Corresponding Author), G. Vosselman

Research output: Contribution to journal › Article › Academic › peer-review

44 Citations (Scopus)
151 Downloads (Pure)


Various classification methods have been developed to extract meaningful information from Airborne Laser Scanner (ALS) point clouds. However, the accuracy and the computational efficiency of the existing methods need to be improved, especially for the analysis of large datasets (e.g., at regional or national levels). In this paper, we present a novel deep learning approach to ground classification for Digital Terrain Model (DTM) extraction as well as for multi-class land-cover classification, delivering highly accurate classification results in a computationally efficient manner. Considering the top–down acquisition angle of ALS data, the point cloud is initially projected on the horizontal plane and converted into a multi-dimensional image. Then, classification techniques based on Fully Convolutional Networks (FCN) with dilated kernels are designed to perform pixel-wise image classification. Finally, labels are transferred from pixels to the original ALS points. We also designed a Multi-Scale FCN (MS-FCN) architecture to minimize the loss of information during the point-to-image conversion. In the ground classification experiment, we compared our method to a Convolutional Neural Network (CNN)-based method and LAStools software. We obtained a lower total error on both the International Society for Photogrammetry and Remote Sensing (ISPRS) filter test benchmark dataset and the AHN-3 dataset in the Netherlands. In the multi-class classification experiment, our method resulted in higher precision and recall values compared to the traditional machine learning technique using Random Forest (RF); it accurately detected small buildings. The FCN achieved precision and recall values of 0.93 and 0.94, whereas RF obtained 0.91 and 0.92, respectively. Moreover, our strategy significantly improved the computational efficiency of state-of-the-art CNN-based methods, reducing the point-to-image conversion time from 47 h to 36 min in our experiments on the ISPRS filter test dataset.
Misclassification errors remained in situations not represented in the training dataset, such as large buildings and bridges, or in areas containing noisy measurements.
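The point-to-image conversion and the pixel-to-point label transfer described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid resolution, the lowest-elevation-per-cell feature, and all function names are illustrative assumptions.

```python
import numpy as np

def points_to_image(points, cell_size):
    """Project ALS points (N x 3 array of x, y, z) onto a horizontal grid.

    Each point is assigned to the grid cell containing its (x, y)
    coordinates; here we keep the lowest elevation per cell as a single
    example feature (the paper uses a multi-dimensional feature image).
    Returns the feature image and each point's (row, col) cell index.
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cells = np.floor((xy - origin) / cell_size).astype(int)
    n_rows, n_cols = cells.max(axis=0) + 1
    image = np.full((n_rows, n_cols), np.nan)
    for (r, c), z in zip(cells, points[:, 2]):
        # keep the minimum z observed in this cell
        if np.isnan(image[r, c]) or z < image[r, c]:
            image[r, c] = z
    return image, cells

def labels_to_points(label_image, cells):
    """Transfer per-pixel class labels back to the original points."""
    return label_image[cells[:, 0], cells[:, 1]]
```

In this sketch, every point falling in a pixel inherits that pixel's predicted class, which is why a fine grid resolution (and the multi-scale architecture) matters: coarse cells mix ground and non-ground points under one label.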
Original language: English
Article number: 1723
Pages (from-to): 1-27
Number of pages: 27
Journal: Remote Sensing
Issue number: 11
Publication status: Published - 1 Nov 2018


  • DTM extraction
  • Filtering
  • Classification
  • Deep Learning
  • Convolutional Neural Network


