Example-Based Human Pose Recovery under Predicted Partial Occlusions

Ronald Walter Poppe

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review



    For human pose recovery, the presence of occlusions due to objects or other persons in the scene remains a difficult problem to cope with. However, recent advances in the area of human detection allow for simultaneous segmentation of humans and the prediction of occluded regions. In this chapter, we present an example-based pose recovery approach that uses this information. We exploited the grid-based nature of histogram of oriented gradients (HOG) descriptors to ignore part of the image observation space. This allowed us to recover poses directly, even in the presence of significant occlusions. We evaluated our approach on the HumanEva-I dataset, where we simulated different occlusion conditions. Without occlusion, we obtained relative 3D errors of approximately 69 mm. Our results showed an approximately 10% increase in error when 20% of the observation is occluded. When 33% of the observation is occluded, the error is on average 15% higher compared to unoccluded observations. These results show that poses can be recovered from partially occluded observations with a moderate increase in error. To the best of our knowledge, our approach is the first to investigate the effect of partial occlusions in a direct matching approach. Future work is aimed at combining our work with human detection.
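    The core idea of exploiting the grid-based HOG layout can be illustrated with a minimal sketch: when a grid cell is predicted to be occluded, all of its orientation bins are simply excluded from the distance computation against the exemplar database, and the remaining distance is normalised by the number of visible dimensions. The function names, the masked Euclidean distance, and the flat `cells * bins` descriptor layout below are illustrative assumptions, not the chapter's actual implementation.

    ```python
    import numpy as np

    def masked_distance(query, exemplar, cell_mask, bins):
        # query, exemplar: flattened HOG-like descriptors of shape (cells * bins,)
        # cell_mask: boolean array of shape (cells,), True = cell is visible
        # Expand the per-cell mask to cover every orientation bin in the cell.
        visible = np.repeat(cell_mask, bins)
        d = query[visible] - exemplar[visible]
        # Normalise by the number of visible dimensions so that scores
        # remain comparable across different amounts of occlusion.
        return np.sqrt(np.sum(d * d) / max(visible.sum(), 1))

    def recover_pose(query, cell_mask, exemplar_descs, exemplar_poses, bins):
        # Direct matching: return the pose of the nearest exemplar,
        # measured only over the unoccluded cells.
        dists = [masked_distance(query, e, cell_mask, bins) for e in exemplar_descs]
        return exemplar_poses[int(np.argmin(dists))]
    ```

    In this sketch the occlusion mask would come from a human detector that also predicts occluded regions, as the chapter proposes; here it is just a boolean array supplied by the caller.
    
    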
    Original language: Undefined
    Title of host publication: Interactive Collaborative Information Systems
    Editors: Robert Babuska, Frans C.A. Groen
    Place of Publication: Berlin
    Number of pages: 28
    ISBN (Print): 978-3-642-11687-2
    Publication status: Published - Mar 2010

    Publication series

    Name: Studies in Computational Intelligence
    Publisher: Springer Verlag


    • IR-70528
    • METIS-270771
    • Occlusion
    • Pose estimation
    • HOG
    • Human pose recovery
    • HMI-CI: Computational Intelligence
    • EWI-17726
