Fully Convolutional Networks for Ground Classification from Airborne Laser Scanner data

Research output: Contribution to conference › Abstract › Other research output

Abstract

We present an efficient procedure to classify airborne laser scanner data into ground and non-ground points. The classification is performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. Compared to a previous CNN-based technique and the LAStools software, the proposed method reduces the total error and the type I error, while the type II error is slightly higher. The method was also tested on AHN-3 data, yielding a total error of 4.02%, a type I error of 2.15%, and a type II error of 6.14%. We show that the method can be extended to further classify the point cloud into buildings and vegetation.
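To make the idea of pixel-wise ground classification with an FCN concrete, the sketch below shows a minimal example. It assumes PyTorch (the abstract does not name a framework) and a hypothetical single-channel rasterized height image as input; the layer counts and channel widths are illustrative and are not the architecture described in the abstract.

# Minimal FCN sketch (illustrative only, not the authors' network), assuming
# PyTorch and a single-channel rasterized height image as input.
import torch
import torch.nn as nn

class GroundFCN(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        # 3x3 convolutions with padding keep the spatial resolution, so every
        # pixel of the input raster receives its own class score.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # A 1x1 convolution replaces the fully connected classifier of a
        # standard CNN; this is what makes the network fully convolutional.
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = GroundFCN()
    raster = torch.randn(1, 1, 256, 256)   # hypothetical 256 x 256 height raster
    logits = model(raster)                 # shape (1, 2, 256, 256): per-pixel scores
    ground_mask = logits.argmax(dim=1)     # per-pixel ground / non-ground labels
    print(ground_mask.shape)               # torch.Size([1, 256, 256])

Mapping the per-pixel labels back to the original points, or enlarging num_classes to cover buildings and vegetation as mentioned in the abstract, would follow the same pattern.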
Original language: English
Pages: 5
Number of pages: 1
Publication status: Published - 28 Nov 2018
Event: NCG symposium 2018 - Wageningen University, Wageningen, Netherlands
Duration: 29 Nov 2018 - 29 Nov 2018
https://ncgeo.nl/index.php/nl/actueel/nieuws/item/2781-programma-ncg-symposium-2018

Conference

Conference: NCG symposium 2018
Country: Netherlands
City: Wageningen
Period: 29/11/18 - 29/11/18
Internet address: https://ncgeo.nl/index.php/nl/actueel/nieuws/item/2781-programma-ncg-symposium-2018

Cite this

Fully Convolutional Networks for Ground Classification from Airborne Laser Scanner data. / Rizaldy, Aldino; Persello, C.; Gevaert, C.M.; Oude Elberink, S.J.; Vosselman, G.

2018. p. 5. Abstract from NCG symposium 2018, Wageningen, Netherlands.
https://ezproxy2.utwente.nl/login?url=https://webapps.itc.utwente.nl/library/2018/pres/persello_ful_abs.pdf
