Self-supervised learning of speech representations with Dutch archival data

Research output: Working paper › Preprint › Academic


Abstract

This paper explores the use of Dutch archival television broadcast data for self-supervised learning of speech foundation models, specifically wav2vec 2.0. We first study data quality assumptions for pre-training, and show how music, noise, and speaker overlap affect SSL convergence and downstream fine-tuning performance. Secondly, we explore effective pre-processing strategies, using Whisper and WhisperX, to convert the noisy broadcast data into a high-quality dataset for pre-training. Thirdly, we compare mono-lingual and multi-lingual pre-training with equivalent amounts of data, and show that mono-lingual pre-training is more robust to out-of-domain data. Lastly, we achieve a state-of-the-art LARGE wav2vec 2.0 model for the Dutch language by continuing pre-training from a wav2vec 2.0 XLS-R model checkpoint on our 55k-hour archival dataset.
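The abstract does not detail the pre-processing pipeline, but a common way to clean noisy broadcast audio with Whisper is to segment it and keep only segments the model decodes as confident speech. The sketch below is an illustrative assumption, not the paper's method: it filters segment dicts shaped like openai-whisper's `result["segments"]` output (which includes `no_speech_prob` and `avg_logprob` fields), with threshold values chosen purely for demonstration.

```python
# Illustrative sketch of Whisper-based data filtering for SSL pre-training.
# Segment dicts mimic entries of openai-whisper's result["segments"];
# the thresholds are assumptions for illustration, not values from the paper.

def filter_segments(segments, max_no_speech=0.5, min_avg_logprob=-1.0):
    """Keep segments Whisper considers likely speech and decodes confidently,
    as a proxy for discarding music, noise, and overlapped speech."""
    kept = []
    for seg in segments:
        if (seg["no_speech_prob"] <= max_no_speech
                and seg["avg_logprob"] >= min_avg_logprob):
            kept.append(seg)
    return kept

if __name__ == "__main__":
    segments = [
        {"start": 0.0, "end": 4.2, "no_speech_prob": 0.05, "avg_logprob": -0.3},  # clean speech
        {"start": 4.2, "end": 9.0, "no_speech_prob": 0.92, "avg_logprob": -1.8},  # music/noise
    ]
    kept = filter_segments(segments)
    print(len(kept))  # -> 1: only the clean-speech segment survives
```

In practice the retained segments would be cut from the source audio and used as pre-training utterances; WhisperX additionally provides word-level alignment and VAD, which can tighten segment boundaries before cutting.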
Original language: English
Publisher: ArXiv.org
Number of pages: 5
DOIs
Publication status: Published - 6 Jul 2025

Keywords

  • cs.SD
  • cs.CL
  • cs.LG
  • eess.AS

