A stabilized adaptive appearance changes model for 3D head tracking

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    9 Citations (Scopus)
    20 Downloads (Pure)

    Abstract

    A simple method is presented for 3D head pose estimation and tracking in monocular image sequences. A generic geometric model is used. The initialization consists of aligning the perspective projection of the geometric model with the subject's head in the initial image. After the initialization, the gray levels from the initial image are mapped onto the visible side of the head model to form a textured object. Only a limited number of points on the object are used, allowing real-time performance even on low-end computers. The appearance changes caused by movement under the complex lighting conditions of a real scene pose a significant problem for fitting the textured model to the data from new images. With real human-computer interfaces in mind, we propose a simple adaptive appearance changes model that is updated by the measurements from the new images. To stabilize the model, we constrain it to a neighborhood of the initial gray values. The neighborhood is defined using simple heuristics.
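
    As an illustration of the stabilization idea described in the abstract, the following minimal Python sketch assumes an exponential-forgetting update of the per-point gray values and a fixed gray-level band around the initial texture; the function name, the adaptation rate alpha, and the band half-width delta are hypothetical choices, not values taken from the paper.

    import numpy as np

    def update_appearance(model, observed, initial, alpha=0.1, delta=30.0):
        """Adaptive appearance update with stabilization (illustrative sketch).

        model    : current gray values at the sampled model points
        observed : gray values measured at the corresponding points in the new image
        initial  : gray values captured at initialization
        alpha    : adaptation rate (hypothetical value)
        delta    : half-width of the allowed neighborhood around the initial
                   gray values (hypothetical heuristic)
        """
        # Blend the previous appearance model with the new measurements.
        updated = (1.0 - alpha) * model + alpha * observed
        # Constrain the model to a neighborhood of the initial gray values,
        # which keeps the adaptive model from drifting away over time.
        return np.clip(updated, initial - delta, initial + delta)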
    Original language: English
    Title of host publication: Proceedings of the IEEE ICCV International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems
    Editors: A. Denise Williams
    Publisher: IEEE
    Pages: 175-180
    Number of pages: 6
    ISBN (Print): 0-7695-1074-4
    DOIs
    Publication status: Published - 2001
    Event: IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems - Vancouver, Canada
    Duration: 13 Jul 2001 - 13 Jul 2001

    Conference

    Conference: IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems
    Country/Territory: Canada
    City: Vancouver
    Period: 13/07/01 - 13/07/01
