Abstract
The automatic analysis of human motion from images opens the way to applications in security and surveillance, human-computer interaction, animation, retrieval, and sports motion analysis. This dissertation focuses on robust and fast human pose recovery and action recognition. The former is a regression task where the aim is to determine the locations of key joints in the human body, given an image of a human figure. The latter is the process of labeling image sequences with action labels, a classification task.
An example-based pose recovery approach is introduced where histograms of oriented gradients (HOG) are used as the image descriptor. From a database containing thousands of HOG-pose pairs, the visually closest examples are selected. Weighted interpolation of the corresponding poses yields the pose estimate. This approach is fast due to the use of a low-cost distance function. To cope with partial occlusions of the human figure, the normalization and matching of the HOG descriptors is changed from the global to the cell level. When occlusion areas in the image are predicted, only the unoccluded part of the descriptor is used for recovery, thus avoiding adaptation of the database to the occlusion setting.
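The example-based lookup can be sketched as follows; this is a minimal illustration assuming descriptors and poses are stored as NumPy arrays, with function and parameter names (`recover_pose`, `cell_mask`) that are illustrative rather than taken from the dissertation:

```python
import numpy as np

def recover_pose(query_hog, db_hogs, db_poses, k=5, cell_mask=None):
    """Estimate a pose by inverse-distance-weighted interpolation over the
    k visually closest database examples (Euclidean distance on HOGs).

    cell_mask: optional boolean mask over descriptor cells. When occluded
    image regions are predicted, only the unmasked cells are matched, so
    the database needs no occlusion-specific adaptation.
    """
    if cell_mask is not None:
        query_hog = query_hog[cell_mask]
        db_hogs = db_hogs[:, cell_mask]
    dists = np.linalg.norm(db_hogs - query_hog, axis=1)  # low-cost distance
    idx = np.argsort(dists)[:k]                          # k nearest examples
    w = 1.0 / (dists[idx] + 1e-8)                        # inverse-distance weights
    w /= w.sum()
    return w @ db_poses[idx]                             # weighted pose interpolation
```

The masking step mirrors the cell-level normalization described above: occluded cells are simply dropped from both the query and the stored descriptors before matching.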
For the recognition of human actions, simple functions are used to discriminate between two classes after applying a common spatial patterns (CSP) transform to sequences of HOG descriptors. The transform maximizes the difference in variance between the two classes. Each discriminative function casts a soft vote for each of the two classes. After evaluation of all pairwise functions, the action class that receives the most voting mass is the estimated class. By combining the two approaches, actions could be recognized by considering sequences of recovered, rotation-normalized poses. Thanks to this normalization, actions could be recognized from arbitrary viewpoints. By handling occlusions in the pose recovery step, actions could be recognized from image observations where occlusion was simulated.
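The CSP step can be sketched with NumPy. This is a generic CSP implementation under the assumption that per-class average covariance matrices of the descriptor sequences are already computed; the function name and the filter-selection convention are illustrative, not from the dissertation:

```python
import numpy as np

def csp_filters(C1, C2, n_pairs=1):
    """Common spatial patterns: find projections that maximize the variance
    of one class while minimizing that of the other.

    C1, C2: average covariance matrices of the two classes' descriptor
    sequences. Returns 2*n_pairs spatial filters (as rows): the first
    n_pairs favor class 2's variance, the last n_pairs favor class 1's.
    """
    s, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(s)).T               # whitening: P (C1 + C2) P.T = I
    _, B = np.linalg.eigh(P @ C1 @ P.T)  # eigenvalues in ascending order
    W = B.T @ P                          # full CSP filter bank
    picks = np.r_[0:n_pairs, W.shape[0] - n_pairs:W.shape[0]]
    return W[picks]
```

In a pairwise scheme, one such filter bank would be trained per class pair; the variances of each sequence's projections feed a simple discriminant whose soft votes are accumulated, and the class collecting the most voting mass is the estimate.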
| Original language | Undefined |
| --- | --- |
| Qualification | Doctor of Philosophy |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 2 Apr 2009 |
| Place of Publication | Enschede |
| Publisher | |
| Print ISBNs | 978-90-365-2810-8 |
| DOIs | |
| Publication status | Published - 2 Apr 2009 |
Keywords
- Pose recovery
- Human action recognition
- Human motion
- Computer Vision
- Action recognition
- IR-60831
- METIS-263844
- Human pose recovery
- HMI-CI: Computational Intelligence
- EWI-15348