Transcribing lectures is a challenging task, both in terms of acoustic and language modeling. In this work, we present our first results on the automatic transcription of lectures from the TED corpus, recently released by ELRA and LDC. In particular, we concentrated our effort on language modeling. Baseline acoustic and language models were developed using, respectively, 8 hours of TED transcripts and various types of text: conference proceedings, lecture transcripts, and conversational speech transcripts. Then, adaptation of the language model to individual speakers was investigated by exploiting different kinds of information: automatic transcripts of the talk, the title of the talk, the abstract, and, finally, the paper. In the last case, a 39.2% WER was achieved.
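A common way to realize the speaker adaptation described above is to linearly interpolate a background language model with a model estimated on the adaptation text (title, abstract, or paper). The abstract does not specify the paper's exact adaptation scheme, so the following is only a minimal illustrative sketch using unigram models and a hypothetical interpolation weight `lam`:

```python
from collections import Counter

def unigram_lm(text):
    """Maximum-likelihood unigram model from whitespace-tokenized text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(background, adapted, lam=0.5):
    """Mix two unigram models: p(w) = lam * p_adapted(w) + (1 - lam) * p_background(w).
    `lam` is an assumed tunable weight, not a value from the paper."""
    vocab = set(background) | set(adapted)
    return {w: lam * adapted.get(w, 0.0) + (1 - lam) * background.get(w, 0.0)
            for w in vocab}

# Toy data standing in for general text vs. talk-specific adaptation text.
background = unigram_lm("the meeting will discuss the annual budget in detail")
adapted = unigram_lm("speech recognition of lectures relies on language models")
mixed = interpolate(background, adapted, lam=0.7)
```

Since both component models are proper distributions, the interpolated probabilities still sum to one, and words from the adaptation text receive boosted probability relative to the background model alone.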
Publication status: Published - 2003
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2003 - Hong Kong Exhibition and Convention Centre, Hong Kong, Hong Kong
Duration: 6 Apr 2003 → 10 Apr 2003