End-to-end visual speech recognition with LSTMs

Stavros Petridis, Zuwei Li, Maja Pantic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

25 Citations (Scopus)

Abstract

Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images, aiming to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long Short-Term Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification, and which also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth images and the difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM, and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the baseline is reported on the OuluVS2 database, and of 1.5% on the CUAVE database, when compared with other methods which use a similar visual front-end.
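To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the two-stream design: one stream sees the raw mouth pixels and the other sees frame-difference images, an LSTM models the temporal dynamics of each stream, and a BLSTM fuses the two before classification. The linear encoders, the layer sizes, the input resolution, and the use of the last time step for classification are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TwoStreamLipreader(nn.Module):
    """Sketch of a two-stream visual speech classifier (assumed dimensions)."""

    def __init__(self, pixels=44 * 50, hidden=256, num_classes=10):
        super().__init__()
        # One encoder per stream: raw mouth-ROI pixels and frame differences.
        # The paper learns features directly from pixels; a single linear
        # layer stands in here for the actual front-end.
        self.encode_raw = nn.Sequential(nn.Linear(pixels, hidden), nn.ReLU())
        self.encode_diff = nn.Sequential(nn.Linear(pixels, hidden), nn.ReLU())
        # Per-stream temporal modelling with one LSTM each.
        self.lstm_raw = nn.LSTM(hidden, hidden, batch_first=True)
        self.lstm_diff = nn.LSTM(hidden, hidden, batch_first=True)
        # Fusion of the two streams via a bidirectional LSTM.
        self.fusion = nn.LSTM(2 * hidden, hidden, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames):
        # frames: (batch, time, pixels) flattened mouth images.
        diffs = frames[:, 1:] - frames[:, :-1]   # temporal difference images
        raw, _ = self.lstm_raw(self.encode_raw(frames[:, 1:]))
        diff, _ = self.lstm_diff(self.encode_diff(diffs))
        fused, _ = self.fusion(torch.cat([raw, diff], dim=-1))
        return self.classifier(fused[:, -1])     # classify from last time step

model = TwoStreamLipreader()
logits = model(torch.randn(4, 25, 44 * 50))      # 4 clips of 25 frames each
```

The difference-image stream captures motion cues that complement the appearance cues of the raw-pixel stream; concatenating the per-stream LSTM outputs at every time step lets the BLSTM fuse both before the utterance-level decision.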

Original language: English
Title of host publication: 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - Proceedings
Publisher: IEEE
Pages: 2592-2596
Number of pages: 5
ISBN (Electronic): 9781509041176
DOI: 10.1109/ICASSP.2017.7952625
Publication status: Published - 16 Jun 2017
Event: 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - New Orleans, United States
Duration: 5 Mar 2017 - 9 Mar 2017
Conference number: 42
http://www.ieee-icassp2017.org/

Conference

Conference: 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017
Abbreviated title: ICASSP
Country: United States
City: New Orleans
Period: 5/03/17 - 9/03/17
Internet address: http://www.ieee-icassp2017.org/

Keywords

  • Deep Networks
  • End-to-End Training
  • Lipreading
  • Long-Short Term Recurrent Neural Networks
  • Visual Speech Recognition

Cite this

Petridis, S., Li, Z., & Pantic, M. (2017). End-to-end visual speech recognition with LSTMs. In 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - Proceedings (pp. 2592-2596). [7952625] IEEE. https://doi.org/10.1109/ICASSP.2017.7952625