Abstract
Token-passing decoders restrict their search space through various types of token pruning. The Language Model Look-Ahead (LMLA) technique makes it possible to prune more tokens without loss of decoding precision. Unfortunately, for token-passing decoders that use a single static pronunciation prefix tree, full n-gram LMLA considerably increases the number of language model probability calculations required. This paper introduces a method for applying full n-gram LMLA in a decoder with a single static pronunciation prefix tree. Experiments show that this method improves the speed of the decoder without increasing the number of search errors.
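As a rough illustration of the general idea behind LMLA (not the specific method introduced in this paper), the sketch below shows how full n-gram look-ahead values might be computed over a pronunciation prefix tree: each node stores the best language model probability among the words reachable from it, and that value is combined with a token's acoustic score during pruning. All class, function, and parameter names (`PrefixTreeNode`, `lm_logprob`, `token_score`) are illustrative assumptions, not taken from the paper.

```python
import math

# Minimal sketch of language model look-ahead (LMLA) over a pronunciation
# prefix tree. The tree and LM representations here are illustrative only.

class PrefixTreeNode:
    def __init__(self, phone=None):
        self.phone = phone
        self.children = []          # child PrefixTreeNode objects
        self.word = None            # word identity if this node ends a word
        self.lookahead = -math.inf  # best LM log-probability reachable from here

def compute_lookahead(node, lm_logprob, history):
    """Fill node.lookahead with the maximum LM log-probability of any word
    reachable from this node, given the current n-gram history.

    lm_logprob(history, word) is assumed to return an n-gram log-probability.
    With a single static tree, this pass has to be redone for every new
    n-gram history, which is the cost the paper aims to reduce.
    """
    best = -math.inf
    if node.word is not None:
        best = lm_logprob(history, node.word)
    for child in node.children:
        best = max(best, compute_lookahead(child, lm_logprob, history))
    node.lookahead = best
    return best

def token_score(acoustic_logprob, node, lm_scale=1.0):
    # A token's pruning score combines its acoustic score with the look-ahead
    # value of the tree node it occupies, so tokens on branches that cannot
    # lead to a likely word can be pruned earlier.
    return acoustic_logprob + lm_scale * node.lookahead
```

Because the look-ahead values depend on the n-gram history, a naive implementation recomputes them whenever a token enters the tree with a new history; limiting that recomputation is what makes full n-gram LMLA practical in a single static tree.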
Original language | English |
---|---|
Title of host publication | Proceedings of Interspeech |
Place of Publication | Brisbane, Australia |
Publisher | International Speech Communication Association (ISCA) |
Pages | 91 |
Number of pages | 4 |
Publication status | Published - 22 Sep 2008 |
Event | 9th Annual Conference of the International Speech Communication Association, INTERSPEECH 2008, Brisbane, Australia. Duration: 22 Sep 2008 → 26 Sep 2008. Conference number: 9 |
Publication series
Name | |
---|---|
Publisher | International Speech Communication Association |
Number | 412 |
ISSN (Print) | 1990-9772 |
Conference
Conference | 9th Annual Conference of the International Speech Communication Association, INTERSPEECH 2008 |
---|---|
Abbreviated title | INTERSPEECH |
Country/Territory | Australia |
City | Brisbane |
Period | 22/09/08 → 26/09/08 |
Keywords
- HMI-SLT: Speech and Language Technology
- Language Model Look-Ahead
- Language modeling
- EC Grant Agreement nr.: FP6/027413
- METIS-255484
- IR-65373
- Automatic Speech Recognition
- Decoding
- EWI-15021