Mapping indoor spaces by adaptive coarse-to-fine registration of RGB-D data

D.R. dos Santos, M.A. Brasso, K. Khoshelham, E. de Oliveira, N.L. Pavan, G. Vosselman

Research output: Contribution to journal › Article › Academic › peer-review

38 Citations (Scopus)
43 Downloads (Pure)

Abstract

In this letter, we present an adaptive coarse-to-fine registration method for 3-D indoor mapping using RGB-D data. We weight the 3-D points based on the theoretical random error of depth measurements and introduce a novel disparity-based model for accurate and robust coarse-to-fine registration. The feature extraction methods required by the method are also presented. First, our method exploits both visual and depth information to compute the initial transformation parameters. We employ the scale-invariant feature transform (SIFT) for detecting, describing, and matching 2-D visual features, and their associated depth values are used to perform the coarse registration. Then, we use an image-based segmentation technique to detect regions in the RGB images; their associated 3-D centroids and the corresponding disparity values are used to refine the initial transformation parameters. Finally, loop-closure detection and a global adjustment of the complete data sequence are used to recognize when the camera has returned to a previously visited location and to minimize the registration errors. The effectiveness of the proposed method is demonstrated on a Kinect data set. The experimental results show that the proposed method can properly map the indoor environment, with relative and absolute accuracies of around 3 and 5 cm, respectively.
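The weighting idea in the abstract can be illustrated with a minimal sketch: matched 3-D points are weighted by the theoretical random error of Kinect depth measurements, which grows roughly quadratically with range, and the rigid transformation is then estimated by a weighted least-squares (Kabsch/SVD) fit. This is not the authors' implementation; the function names, the error constant `k`, and the use of a plain weighted Kabsch solver are illustrative assumptions.

```python
import numpy as np

def depth_weight(z, k=2.85e-3):
    # Assumed Kinect error model: random depth error grows quadratically
    # with range, sigma_z ~= k * z^2 (k is an illustrative constant).
    sigma = k * z ** 2
    return 1.0 / sigma ** 2  # inverse-variance weights

def weighted_rigid_transform(P, Q, w):
    # Weighted Kabsch: find R, t minimizing sum_i w_i * ||R p_i + t - q_i||^2
    # for matched 3-D point sets P (source) and Q (target), shape (N, 3).
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)          # weighted centroids
    q_bar = (w[:, None] * Q).sum(axis=0)
    X = (P - p_bar) * w[:, None]
    Y = Q - q_bar
    H = X.T @ Y                                   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

In a full pipeline, `P` and `Q` would come from back-projecting matched SIFT keypoints with their depth values; down-weighting far points this way keeps the noisier long-range measurements from dominating the transformation estimate.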
Original language: English
Pages (from-to): 262-266
Number of pages: 5
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 13
Issue number: 2
DOIs
Publication status: Published - 2016

Keywords

  • ITC-ISI-JOURNAL-ARTICLE
  • 2023 OA procedure

