TY - CHAP
T1 - Fully Convolutional Networks for Ground Classification from LiDAR Point Clouds
AU - Rizaldi, Aldino
AU - Persello, C.
AU - Gevaert, C.M.
AU - Oude Elberink, S.J.
PY - 2018/6/4
Y1 - 2018/6/4
N2 - Deep learning has been widely used for image classification in recent years, and its use for ground classification from LiDAR point clouds has recently been studied as well. However, point clouds need to be converted into images before Convolutional Neural Networks (CNNs) can be applied. In state-of-the-art techniques this conversion is slow because each point is converted into a separate image, which leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method achieves a total error of 5.22%, a type I error of 4.10%, and a type II error of 15.07%. Compared to a previous CNN-based technique and the LAStools software, the proposed method reduces both the total error and the type I error, while the type II error is slightly higher. The method was also tested on a very high point density LiDAR point cloud, resulting in a total error of 4.02%, a type I error of 2.15%, and a type II error of 6.14%.
KW - ITC-GOLD
UR - https://ezproxy2.utwente.nl/login?url=https://webapps.itc.utwente.nl/library/2018/chap/persello_ful.pdf
U2 - 10.5194/isprs-annals-IV-2-231-2018
DO - 10.5194/isprs-annals-IV-2-231-2018
M3 - Chapter
VL - 4
T3 - ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
SP - 231
EP - 238
BT - 2018 ISPRS TC II Mid-term Symposium “Towards Photogrammetry 2020”, 4–7 June 2018, Riva del Garda, Italy
PB - International Society for Photogrammetry and Remote Sensing (ISPRS)
CY - Riva del Garda
ER -