From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild

Akshay Asthana, Stefanos Zafeiriou, Georgios Tzimiropoulos, Shiyang Cheng, Maja Pantic

    Research output: Contribution to journal › Article › Academic › peer-review

    38 Citations (Scopus)

    Abstract

    We propose a face alignment framework that relies on the texture model generated by the responses of discriminatively trained part-based filters. Unlike standard texture models built from pixel intensities or responses generated by generic filters (e.g. Gabor), our framework has two important advantages. First, by virtue of discriminative training, invariance to external variations (like identity, pose, illumination and expression) is achieved. Second, we show that the responses generated by discriminatively trained filters (or patch-experts) are sparse and can be modeled using a very small number of parameters. As a result, the optimization methods based on the proposed texture model can better cope with unseen variations. We illustrate this point by formulating both part-based and holistic approaches for generic face alignment and show that our framework outperforms the state-of-the-art on multiple "wild" databases. The code and dataset annotations are available for research purposes from http://ibug.doc.ic.ac.uk/resources.
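
    The sketch below is not the authors' released code; it is a minimal illustration, under stated assumptions, of the two ideas summarized in the abstract: a discriminatively trained linear patch expert is correlated with a local search region to produce a peaked response map, and a collection of such maps is then described with only a few parameters (here, a toy PCA compression). All function and variable names (response_map, compress_responses, patch_expert) are illustrative and do not come from the paper or the linked resource page.

    # Minimal sketch (illustrative, not the authors' method as released).
    import numpy as np

    def response_map(image, patch_expert, bias=0.0):
        """Correlate a linear patch expert (filter) with every position of a
        local search region and squash the scores to (0, 1) with a logistic,
        giving an alignment-probability map around a landmark."""
        ph, pw = patch_expert.shape
        ih, iw = image.shape
        out = np.empty((ih - ph + 1, iw - pw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                patch = image[y:y + ph, x:x + pw]
                out[y, x] = np.sum(patch * patch_expert) + bias
        return 1.0 / (1.0 + np.exp(-out))

    def compress_responses(maps, n_components=5):
        """Model a set of (flattened) response maps with a small number of
        PCA parameters, illustrating that sparse, peaked responses can be
        captured by very few modes."""
        X = np.stack([m.ravel() for m in maps])        # (num_maps, H*W)
        mean = X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:n_components]                      # principal response modes
        params = (X - mean) @ basis.T                  # low-dimensional encoding
        return mean, basis, params

    # Toy usage: a random "image" and a random linear filter standing in for
    # a discriminatively trained patch expert.
    rng = np.random.default_rng(0)
    img = rng.standard_normal((31, 31))
    expert = rng.standard_normal((11, 11))
    rmap = response_map(img, expert)
    mean, basis, params = compress_responses([rmap, 0.9 * rmap, rmap ** 2], n_components=2)
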
    Original language: English
    Pages (from-to): 1312-1320
    Number of pages: 9
    Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Volume: 37
    Issue number: 6
    DOIs
    Publication status: Published - 1 Jun 2015

    Keywords

    • HMI-HF: Human Factors
    • Constrained local models
    • EC Grant Agreement nr.: FP7/288235
    • EC Grant Agreement nr.: FP7/2007-2013
    • Facial landmark detection
    • Active appearance models
    • Face alignment
    • n/a OA procedure
