End-to-end visual speech recognition with LSTMs

Stavros Petridis*, Zuwei Li, Maja Pantic

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    86 Citations (Scopus)

    Abstract

    Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long Short-Term Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification, and which also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM, and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the baseline is reported on the OuluVS2 database, and 1.5% on the CUAVE database when compared with other methods which use a similar visual front-end.
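    The two-stream architecture in the abstract can be sketched in code. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: all layer sizes, the linear frame encoders, and the class names are assumptions. It keeps only the structure the abstract describes: one stream over raw mouth frames, one over frame differences, an LSTM per stream, and a BLSTM that fuses the two before classification.

    ```python
    import torch
    import torch.nn as nn

    class TwoStreamLipNet(nn.Module):
        """Hypothetical sketch of the two-stream model from the abstract."""

        def __init__(self, input_dim=32 * 32, enc_dim=256, hidden=256, n_classes=10):
            super().__init__()
            # Per-stream frame encoders (stand-ins for the paper's feature-extraction layers)
            self.enc_raw = nn.Sequential(nn.Linear(input_dim, enc_dim), nn.ReLU())
            self.enc_diff = nn.Sequential(nn.Linear(input_dim, enc_dim), nn.ReLU())
            # One LSTM per stream models that stream's temporal dynamics
            self.lstm_raw = nn.LSTM(enc_dim, hidden, batch_first=True)
            self.lstm_diff = nn.LSTM(enc_dim, hidden, batch_first=True)
            # A bidirectional LSTM fuses the two streams
            self.fusion = nn.LSTM(2 * hidden, hidden, batch_first=True,
                                  bidirectional=True)
            self.classifier = nn.Linear(2 * hidden, n_classes)

        def forward(self, frames):
            # frames: (batch, time, input_dim) flattened mouth-ROI frames
            diff = frames[:, 1:] - frames[:, :-1]   # difference images
            raw = frames[:, 1:]                     # drop first frame to align lengths
            h_raw, _ = self.lstm_raw(self.enc_raw(raw))
            h_diff, _ = self.lstm_diff(self.enc_diff(diff))
            fused, _ = self.fusion(torch.cat([h_raw, h_diff], dim=-1))
            return self.classifier(fused[:, -1])    # logits from the last time step

    model = TwoStreamLipNet()
    logits = model(torch.randn(2, 20, 32 * 32))     # 2 clips, 20 frames each
    print(logits.shape)                             # torch.Size([2, 10])
    ```

    Under end-to-end training, gradients from the classification loss flow through the BLSTM fusion back into both stream encoders, which is what lets feature extraction and classification be learned jointly rather than in two separate stages.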

    Original language: English
    Title of host publication: 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - Proceedings
    Publisher: IEEE
    Pages: 2592-2596
    Number of pages: 5
    ISBN (Electronic): 9781509041176
    DOIs
    Publication status: Published - 16 Jun 2017
    Event: 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - New Orleans, United States
    Duration: 5 Mar 2017 - 9 Mar 2017
    Conference number: 42
    http://www.ieee-icassp2017.org/

    Conference

    Conference: 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017
    Abbreviated title: ICASSP
    Country/Territory: United States
    City: New Orleans
    Period: 5/03/17 - 9/03/17

    Keywords

    • Deep Networks
    • End-to-End Training
    • Lipreading
    • Long Short-Term Recurrent Neural Networks
    • Visual Speech Recognition

