Fusion of camera images and laser scans for wide baseline 3D scene alignment in urban environments

Michael Ying Yang, Yanpeng Cao, John Mcdonald

Research output: Contribution to journal › Article › Academic › peer-review

20 Citations (Scopus)

Abstract

In this paper we address the problem of automatic laser scan registration in urban environments. This is a challenging problem for two major reasons. First, two individual laser scans may be captured from significantly different viewpoints (wide baseline) and have very little overlap. Second, man-made buildings usually contain many structures of similar appearance, which results in considerable aliasing in the matching process. By fusing laser data with camera images, we propose a novel improvement to existing 2D feature techniques that enables automatic 3D alignment between two widely separated scans. The key idea is to extract dominant planar structures from the 3D point clouds and then use the recovered 3D geometry to improve the performance of 2D image features for wide baseline matching. After viewpoint normalization, the resulting feature descriptors become more robust to camera viewpoint changes. Moreover, the viewpoint-normalized 2D features provide robust local information, including patch scale and dominant orientation, for effective matching of repetitive structures in man-made environments. Comprehensive experimental evaluations with real data demonstrate the potential of the proposed method for automatic wide baseline 3D scan alignment in urban environments.
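
The abstract outlines a two-stage idea: recover dominant planar structures from the laser point cloud, then use that geometry to viewpoint-normalize the 2D image features before matching. The sketch below (Python with NumPy/OpenCV) only illustrates this idea and is not the authors' implementation; the helper names (fit_plane_ransac, rectify_plane_patch), the parameter values, the pinhole/intrinsics setup, and the use of SIFT on the rectified patch are all illustrative assumptions.

import numpy as np
import cv2


def fit_plane_ransac(points, n_iters=500, inlier_thresh=0.05, seed=None):
    """RANSAC fit of a dominant plane n.x + d = 0 to an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:          # degenerate (near-collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p0)
        inliers = np.abs(points @ n + d) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (n, d), inliers
    return best_plane, best_inliers


def rectify_plane_patch(image, K, plane_points, normal, px_per_metre=100.0):
    """Warp the image region covered by a planar structure to a fronto-parallel
    view, given 3D points on the plane (camera frame) and intrinsics K."""
    # Project the 3D plane points into the image (pinhole model).
    proj = (K @ plane_points.T).T
    img_pts = proj[:, :2] / proj[:, 2:3]

    # Orthonormal basis (u, v) spanning the plane.
    centroid = plane_points.mean(axis=0)
    u = plane_points[np.argmax(np.linalg.norm(plane_points - centroid, axis=1))] - centroid
    u = u - normal * np.dot(u, normal)
    u = u / np.linalg.norm(u)
    v = np.cross(normal, u)

    # Metric in-plane coordinates -> pixel coordinates of the rectified view.
    rel = plane_points - centroid
    plane_uv = np.stack([rel @ u, rel @ v], axis=1) * px_per_metre
    plane_uv = plane_uv - plane_uv.min(axis=0)

    # The map between the original image and a fronto-parallel view of a plane
    # is a homography; estimate it from the 2D-2D correspondences and warp.
    H, _ = cv2.findHomography(img_pts.astype(np.float32),
                              plane_uv.astype(np.float32), cv2.RANSAC)
    w, h = np.ceil(plane_uv.max(axis=0)).astype(int) + 1
    return cv2.warpPerspective(image, H, (int(w), int(h)))


# Usage sketch for one scan/image pair: `cloud` is an (N, 3) laser point cloud
# expressed in the camera frame, `img` the co-registered image, and `K` the
# 3x3 camera intrinsic matrix (all assumed given by the sensor calibration).
# (plane_n, _), inliers = fit_plane_ransac(cloud)            # dominant facade
# rectified = rectify_plane_patch(img, K, cloud[inliers], plane_n)
# sift = cv2.SIFT_create()
# keypoints, descriptors = sift.detectAndCompute(rectified, None)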
Original language: English
Pages (from-to): S52-S61
Number of pages: 10
Journal: ISPRS journal of photogrammetry and remote sensing
Volume: 66
Issue number: 6
DOI: 10.1016/j.isprsjprs.2011.09.004
Publication status: Published - 1 Dec 2011

Keywords

  • ITC-ISI-JOURNAL-ARTICLE

Cite this

@article{aec98d070eca4a019026bb6b48bf223e,
  title     = "Fusion of camera images and laser scans for wide baseline 3D scene alignment in urban environments",
  author    = "Yang, {Michael Ying} and Yanpeng Cao and John Mcdonald",
  journal   = "ISPRS journal of photogrammetry and remote sensing",
  year      = "2011",
  month     = "12",
  day       = "1",
  volume    = "66",
  number    = "6",
  pages     = "S52--S61",
  doi       = "10.1016/j.isprsjprs.2011.09.004",
  issn      = "0924-2716",
  publisher = "Elsevier",
  language  = "English",
  keywords  = "ITC-ISI-JOURNAL-ARTICLE",
}
