Fast N-Gram Language Model Look-Ahead for Decoders With Static Pronunciation Prefix Trees

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    3 Citations (Scopus)
    37 Downloads (Pure)

    Abstract

    Decoders that use token passing restrict their search space through various types of token pruning. The Language Model Look-Ahead (LMLA) technique makes it possible to increase the number of tokens that can be pruned without loss of decoding precision. Unfortunately, for token-passing decoders built on a single static pronunciation prefix tree, full n-gram LMLA considerably increases the number of language model probability calculations required. This paper introduces a method for applying full n-gram LMLA in a decoder with a single static pronunciation prefix tree. Experiments show that this method improves the speed of the decoder without increasing the number of search errors.
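    The idea behind LMLA can be sketched as follows: each node of the pronunciation prefix tree is annotated with the best language model probability of any word reachable below it, so a token's score can be tightened for beam pruning before the word identity is known. This is a minimal illustrative sketch, not the paper's method; the lexicon, the unigram probabilities, and all names (`LEXICON`, `LM_LOGPROB`, `Node`) are hypothetical, and a full implementation would condition the look-ahead on each token's n-gram history.

```python
import math

# Hypothetical toy lexicon: word -> phone sequence.
LEXICON = {
    "cat": ["k", "ae", "t"],
    "cab": ["k", "ae", "b"],
    "dog": ["d", "ao", "g"],
}

# Hypothetical unigram log-probabilities; full n-gram LMLA would use
# probabilities conditioned on the token's language model history.
LM_LOGPROB = {"cat": math.log(0.5), "cab": math.log(0.2), "dog": math.log(0.3)}

class Node:
    """Node of a pronunciation prefix tree."""
    def __init__(self):
        self.children = {}          # phone -> Node
        self.words = []             # words whose pronunciation ends here
        self.lookahead = -math.inf  # best LM log-prob reachable below

def build_tree(lexicon):
    root = Node()
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.children.setdefault(ph, Node())
        node.words.append(word)
    return root

def compute_lookahead(node):
    """Bottom-up pass: store at each node the max LM log-prob of any
    word in its subtree (the look-ahead factor)."""
    best = max((LM_LOGPROB[w] for w in node.words), default=-math.inf)
    for child in node.children.values():
        best = max(best, compute_lookahead(child))
    node.lookahead = best
    return best

root = build_tree(LEXICON)
compute_lookahead(root)

# A token sitting at the "k" -> "ae" node can already be scored with the
# best LM probability of any word sharing that prefix ("cat" here),
# letting the beam prune it earlier than acoustic score alone would.
node = root.children["k"].children["ae"]
print(round(node.lookahead, 4))  # log(0.5) ~ -0.6931
```

With static trees the tree structure is shared across all tokens, which is why recomputing these look-ahead factors per LM context is the expensive step the paper targets.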
    Original language: English
    Title of host publication: Proceedings of Interspeech
    Place of publication: Brisbane, Australia
    Publisher: International Speech Communication Association (ISCA)
    Pages: 91
    Number of pages: 4
    Publication status: Published - 22 Sep 2008
    Event: 9th Annual Conference of the International Speech Communication Association, INTERSPEECH 2008 - Brisbane, Australia
    Duration: 22 Sep 2008 – 26 Sep 2008
    Conference number: 9

    Publication series

    Name
    Publisher: International Speech Communication Association
    Number: 412
    ISSN (Print): 1990-9772

    Conference

    Conference: 9th Annual Conference of the International Speech Communication Association, INTERSPEECH 2008
    Abbreviated title: INTERSPEECH
    Country: Australia
    City: Brisbane
    Period: 22/09/08 – 26/09/08

    Keywords

    • HMI-SLT: Speech and Language Technology
    • Language Model Look-Ahead
    • Language modeling
    • EC Grant Agreement nr.: FP6/027413
    • METIS-255484
    • IR-65373
    • Automatic Speech Recognition
    • Decoding
    • EWI-15021

