Fusion of optical and LiDAR images for urban objects recognition

W. Liao*, F. Van Coillie, H. Zhang, S. Gautama, W. Philips

*Corresponding author for this work

Research output: Conference contribution (chapter in conference proceedings), academic, peer-reviewed

Abstract

Nowadays, advanced sensor technology and image processing algorithms allow us to measure different aspects of objects on the Earth’s surface, from spectral characteristics in optical images and height information in LiDAR data to spatial information generated by image processing software such as the commercial package eCognition®. However, automatic recognition of objects in remotely sensed scenes remains challenging, and a single technology is often insufficient to obtain reliable classification results (Debes, 2014). Multisensor data, once combined, can contribute to a more comprehensive interpretation of objects on the ground. For example, spectral reflectance in optical images cannot distinguish objects in shadow, whereas such objects can often be detected easily in LiDAR data; conversely, LiDAR data alone may fail to discriminate between objects of similar height.

Stacking multi-source data together is a widely applied data fusion technique for classification. Such methods first apply feature extraction to each individual data source and then concatenate all feature sources into one stacked vector for classification. While appealing for their simplicity, these methods do not always perform better than using a single data source, because the value ranges of the different components in the stacked feature vector can be significantly unbalanced. As a consequence, the information contained in the different data sources is not equally represented or weighted. Furthermore, the increased dimensionality of the stacked features, combined with the limited number of labelled samples, can lead to the “curse of dimensionality” (Liao, 2015).
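The imbalance problem with stacked-vector fusion can be illustrated with a minimal NumPy sketch. The feature values and array names below are hypothetical (unit-range spectral bands versus heights in metres), not data from the paper; the point is only that, without rescaling, the source with the larger value range dominates distance computations in the stacked space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample features from two sources (illustrative only):
# optical reflectances in [0, 1], LiDAR heights in metres (tens of metres).
optical = rng.uniform(0.0, 1.0, size=(100, 4))    # 4 spectral bands
height = rng.uniform(0.0, 40.0, size=(100, 1))    # 1 height channel

# Stacked-vector fusion: simple concatenation of the feature sources.
stacked = np.hstack([optical, height])            # shape (100, 5)

# The unbalanced value ranges mean the height channel dominates
# Euclidean distances between stacked vectors.
d = stacked - stacked.mean(axis=0)
contrib = (d ** 2).sum(axis=0)                    # per-feature variance share
print(contrib[-1] / contrib.sum())                # height's share, close to 1
```

Here a single height channel accounts for nearly all of the variance of the stacked vector, so a classifier relying on distances in this space effectively ignores the four spectral bands unless the sources are normalised first.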

Therefore, we present a local graph fusion method that fuses a true orthophoto and a LiDAR image for urban object recognition. First, object-based spatial and height information is generated from the true orthophoto and the LiDAR image, respectively. Second, we build a local fusion graph within a sliding window, in which only data points with similar spatial and height characteristics are connected. Finally, we solve the multisensor data fusion problem by projecting the multisensor data into a subspace in which the advantages of the different data sources are well exploited. Experimental results on the fusion of a true orthophoto and a LiDAR image from the ISPRS Test Project on Urban Classification and 3D Building Reconstruction demonstrate the potential of the proposed method. Compared to methods that use only a single data source or stack the sources together, our approach achieves significant improvements in overall classification accuracy. Both the method’s details and the results of a comprehensive test will be presented at GEOBIA 2016.
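One generic way to read the fusion step is as graph-based embedding: per-source affinity graphs are combined so that an edge survives only when two samples are similar in both the spatial and the height features, and the eigenvectors of the fused graph Laplacian then give the shared subspace. The sketch below follows that reading; the function names (`knn_affinity`, `fused_embedding`), the k-NN construction, and the minimum-of-graphs fusion rule are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def knn_affinity(X, k=5):
    """Symmetric 0/1 k-NN affinity: an edge if either sample is
    among the other's k nearest neighbours."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # no self-edges
    nn = np.argsort(d, axis=1)[:, :k]         # k nearest neighbours per row
    W = np.zeros_like(d)
    W[np.repeat(np.arange(len(X)), k), nn.ravel()] = 1.0
    return np.maximum(W, W.T)                 # symmetrise

def fused_embedding(spatial_feats, height_feats, dim=2, k=5):
    """Embed samples using a graph fused from two feature sources."""
    # Keep an edge only if it exists in BOTH per-source graphs,
    # i.e. the samples are similar in spatial AND height features.
    W = np.minimum(knn_affinity(spatial_feats, k),
                   knn_affinity(height_feats, k))
    L = np.diag(W.sum(axis=1)) - W            # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    # Skip the trivial (near-constant) eigenvector; keep the next `dim`.
    return vecs[:, 1:dim + 1]

# Toy "sliding window" of 30 samples: 4 spatial and 1 height feature.
rng = np.random.default_rng(1)
Z = fused_embedding(rng.normal(size=(30, 4)), rng.normal(size=(30, 1)))
print(Z.shape)   # (30, 2)
```

In this reading, projecting onto the low-order Laplacian eigenvectors places samples close together only when both sources agree they are similar, which is one way a fused subspace can exploit the complementary strengths of the two sensors.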
Original language: English
Title of host publication: Proceedings of GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, Enschede, Netherlands
Editors: N. Kerle, M. Gerke, S. Lefevre
Place of publication: Enschede
Publisher: University of Twente, Faculty of Geo-Information Science and Earth Observation (ITC)
Number of pages: 1
ISBN (Print): 978-90-365-4201-2
Publication status: Published - 14 Sep 2016
Externally published: Yes
Event: 6th International Conference on Geographic Object-Based Image Analysis (GEOBIA 2016): Solutions & Synergies, University of Twente, Faculty of Geo-Information and Earth Observation (ITC), Enschede, Netherlands, 14-16 Sep 2016 (conference number: 6), https://www.geobia2016.com/

