Abstract
We introduce a robust framework for learning and fusing orientation appearance models based on both texture and depth information for rigid object tracking. Our framework fuses data obtained from a standard visual camera with dense depth maps obtained from low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our framework combines image gradient orientations extracted from intensity images with the directions of surface normals computed from dense depth fields. We propose to capture the correlations between the obtained orientation appearance models using a fusion approach motivated by the original Active Appearance Models (AAMs). To incorporate these features into a learning framework, we use a robust kernel based on the Euler representation of angles, which does not require off-line training and can be efficiently implemented online. The robustness of learning from orientation appearance models is demonstrated both theoretically and experimentally in this work. This kernel enables us to cope with gross measurement errors and missing data, as well as other typical problems such as illumination changes and occlusions. By combining the proposed models with a particle filter, the framework performs 2D plus 3D rigid object tracking, achieving robust performance in very difficult tracking scenarios, including extreme pose variations.
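The core idea of the abstract is to represent both modalities as angles and compare them via the Euler representation, z = cos(theta) + i*sin(theta), so that a simple inner product yields a sum of cosines of angle differences that is insensitive to gross outliers. The sketch below illustrates this under my own assumptions; the function names and the use of the azimuth of the surface normal as the depth-based angle are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def gradient_orientations(intensity):
    # Image gradient orientations arctan2(Gy, Gx) from an intensity image.
    gy, gx = np.gradient(intensity.astype(float))
    return np.arctan2(gy, gx)

def normal_orientations(depth):
    # Illustrative depth-based angle: in-plane orientation of the surface
    # normal (-dz/dx, -dz/dy, 1) computed from a dense depth map.
    dzdy, dzdx = np.gradient(depth.astype(float))
    return np.arctan2(-dzdy, -dzdx)

def euler_features(angles):
    # Map angles to the unit circle: z = cos(theta) + i*sin(theta), normalized.
    a = angles.ravel()
    return (np.cos(a) + 1j * np.sin(a)) / np.sqrt(a.size)

def orientation_correlation(angles_a, angles_b):
    # Robust similarity: Re<z_a, z_b> = mean of cos(theta_a - theta_b).
    # Orientations that are uncorrelated (occluded or corrupted pixels)
    # contribute cosines that roughly cancel out.
    za, zb = euler_features(angles_a), euler_features(angles_b)
    return np.real(np.vdot(za, zb))
```

Fusing the two modalities then amounts to stacking the texture-based and depth-based Euler feature vectors (in the spirit of the AAM-motivated fusion mentioned above) and learning a subspace over the combined representation online.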
| Original language | Undefined |
|---|---|
| Pages (from-to) | 707-727 |
| Number of pages | 21 |
| Journal | Image and Vision Computing |
| Volume | 32 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - Oct 2014 |
Keywords
- EWI-25807
- HMI-HF: Human Factors
- RGB-D
- Subspace learning
- Online learning
- IR-95240
- EC Grant Agreement nr.: FP7/2007-2013
- Fusion of orientation appearance models
- Face analysis
- Rigid object tracking
- METIS-309939
- EC Grant Agreement nr.: FP7/288235