Fusion of camera images and laser scans for wide baseline 3D scene alignment in urban environments

Michael Ying Yang, Yanpeng Cao, John Mcdonald

Research output: Contribution to journal › Article › Academic › peer-review

25 Citations (Scopus)

Abstract

In this paper we address the problem of automatic laser scan registration in urban environments. This is a challenging problem for two major reasons. First, two individual laser scans might be captured from significantly different viewpoints (wide baseline) and have very little overlap. Second, man-made buildings usually contain many structures of similar appearance, which results in considerable aliasing in the matching process. By fusing laser data with camera images, we propose a novel improvement to existing 2D feature techniques that enables automatic 3D alignment between two widely separated scans. The key idea is to extract dominant planar structures from the 3D point clouds and then use the recovered 3D geometry to improve the performance of 2D image features for wide baseline matching. The resulting feature descriptors become more robust to camera viewpoint changes after viewpoint normalization. Moreover, the viewpoint-normalized 2D features provide robust local feature information, including patch scale and dominant orientation, for effective matching of repetitive structures in man-made environments. Comprehensive experimental evaluations with real data demonstrate the potential of the proposed method for automatic wide baseline 3D scan alignment in urban environments.
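The abstract outlines two geometric steps: extracting a dominant planar structure from a 3D point cloud and using the recovered plane to viewpoint-normalize image features. The sketch below is not the authors' implementation; it only illustrates those two steps under simplifying assumptions (a basic RANSAC plane fit and a plane-induced homography for a single calibrated camera). All function names, parameters, and the camera model are illustrative.

```python
# Minimal sketch: (1) RANSAC fit of a dominant plane n·x + d = 0 to a point cloud,
# (2) plane-induced homography that warps the observed view of that plane toward a
# fronto-parallel (viewpoint-normalized) virtual view. Illustrative only.
import numpy as np

def fit_dominant_plane(points, iters=500, inlier_thresh=0.05, rng=None):
    """RANSAC: fit a plane n·x + d = 0 to an (N, 3) point cloud."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from the three sampled points.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (near-collinear) sample
            continue
        n /= norm
        d = -n.dot(sample[0])
        dist = np.abs(points @ n + d)
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

def plane_induced_homography(K, R, t, n, d):
    """Homography induced by the plane n·x + d = 0 (expressed in the current
    camera frame) between the current view and a virtual view with relative
    pose (R, t): H = K (R - t nᵀ / d) K⁻¹. Choosing (R, t) so the virtual
    camera faces the plane head-on gives a fronto-parallel rectification."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

if __name__ == "__main__":
    # Synthetic demo: noisy points on the plane z = 5 plus a few outliers.
    rng = np.random.default_rng(1)
    plane_pts = np.c_[rng.uniform(-2, 2, (200, 2)), np.full(200, 5.0)]
    cloud = np.vstack([plane_pts + rng.normal(0, 0.01, plane_pts.shape),
                       rng.uniform(-2, 6, (40, 3))])
    (n, d), inliers = fit_dominant_plane(cloud)
    print("plane normal:", np.round(n, 2), "inliers:", int(inliers.sum()))
```

In practice the recovered homography would be applied to the image patch (e.g. with a perspective warp) before computing 2D feature descriptors, which is the intuition behind the viewpoint normalization described in the abstract.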
Original language: English
Pages (from-to): S52-S61
Number of pages: 10
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 66
Issue number: 6
DOIs
Publication status: Published - 1 Dec 2011

Keywords

  • ITC-ISI-JOURNAL-ARTICLE
