FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge

Michel F. Valstar, Enrique Sanchez-Lozano, Jeffrey F. Cohn, Laszlo A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

36 Citations (Scopus)

Abstract

The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions in which faces appear in a wide range of poses (or camera views) while displaying ecologically valid expressions. The main obstacle to assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Unit (AU) occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE Conference on Automatic Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.
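The two sub-challenges imply two different scoring problems: binary classification for AU occurrence and ordinal regression for AU intensity. A minimal sketch of two metrics commonly used for these tasks in the FERA challenge series — per-AU F1 score for occurrence and intra-class correlation ICC(3,1) for intensity — follows; the paper itself defines the official evaluation protocol, and the function names here are illustrative:

```python
def f1_binary(y_true, y_pred):
    """F1 score for binary AU occurrence labels (1 = AU active)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def icc_3_1(y_true, y_pred):
    """ICC(3,1): agreement between two 'raters' (ground truth and
    prediction) over n targets, via a two-way ANOVA decomposition."""
    n, k = len(y_true), 2
    rows = list(zip(y_true, y_pred))               # n targets x k raters
    grand = sum(v for r in rows for v in r) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(y_true) / n, sum(y_pred) / n]
    ss_total = sum((v - grand) ** 2 for r in rows for v in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

In challenge practice such scores are computed per Action Unit and per camera view, then averaged; a perfect intensity predictor yields ICC of 1, while ICC near 0 indicates no agreement beyond chance.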

Original language: English
Title of host publication: 12th IEEE International Conference on Automatic Face & Gesture Recognition 2017
Place of publication: Piscataway, NJ
Publisher: IEEE
Pages: 839-847
Number of pages: 9
ISBN (Electronic): 978-1-5090-4023-0
ISBN (Print): 978-1-5090-4024-7
DOIs: 10.1109/FG.2017.107
Publication status: Published - 28 Jun 2017
Externally published: Yes
Event: 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - Washington, United States
Duration: 30 May 2017 - 3 Jun 2017
Conference number: 12
Internet address: http://www.fg2017.org/

Conference

Conference: 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017
Abbreviated title: FG
Country: United States
City: Washington
Period: 30/05/17 - 03/06/17
Internet address: http://www.fg2017.org/


Cite this

Valstar, M. F., Sanchez-Lozano, E., Cohn, J. F., Jeni, L. A., Girard, J. M., Zhang, Z., Yin, L., & Pantic, M. (2017). FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge. In 12th IEEE International Conference on Automatic Face & Gesture Recognition 2017 (pp. 839-847). Article 7961830. Piscataway, NJ: IEEE. https://doi.org/10.1109/FG.2017.107