TY - JOUR
T1 - Mapping indoor spaces by adaptive coarse-to-fine registration of RGB-D data
AU - dos Santos, D.R.
AU - Basso, M.A.
AU - Khoshelham, K.
AU - de Oliveira, E.
AU - Pavan, N.L.
AU - Vosselman, G.
PY - 2016
Y1 - 2016
N2 - In this letter, we present an adaptive coarse-to-fine registration method for 3-D indoor mapping using RGB-D data. We weight the 3-D points based on the theoretical random error of depth measurements and introduce a novel disparity-based model for accurate and robust coarse-to-fine registration. The feature extraction methods required by the approach are also presented. First, our method exploits both visual and depth information to compute the initial transformation parameters. We employ the scale-invariant feature transform (SIFT) for detecting, extracting, and matching 2-D visual features, and their associated depth values are used to perform the coarse registration. Then, we use an image-based segmentation technique to detect regions in the RGB images; their associated 3-D centroids and the corresponding disparity values are used to refine the initial transformation parameters. Finally, loop-closure detection and a global adjustment of the complete data sequence are used to recognize when the camera has returned to a previously visited location and to minimize the registration errors. The effectiveness of the proposed method is demonstrated on a Kinect data set. The experimental results show that the proposed method can properly map indoor environments with relative and absolute accuracies of around 3-5 cm.
KW - ITC-ISI-JOURNAL-ARTICLE
KW - 2023 OA procedure
UR - https://ezproxy2.utwente.nl/login?url=https://webapps.itc.utwente.nl/library/2016/isi/vosselman_map.pdf
U2 - 10.1109/LGRS.2015.2508880
DO - 10.1109/LGRS.2015.2508880
M3 - Article
SN - 1545-598X
VL - 13
SP - 262
EP - 266
JO - IEEE Geoscience and Remote Sensing Letters
JF - IEEE Geoscience and Remote Sensing Letters
IS - 2
ER -