Multimodal Speaker Diarization

Athanasios Noulas*, Gwenn Englebienne, Ben J.A. Kröse

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

We present a novel probabilistic framework that fuses information from the audio and video modalities to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that extends the factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is robust across different contexts, makes no assumptions about the location of the recording equipment, and requires no labeled training data, as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all from publicly available data sets. The speaker diarization results favor the proposed multimodal framework, which outperforms single-modality analysis and improves over state-of-the-art audio-based speaker diarization.
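The core idea of the abstract can be illustrated with a toy sketch (not the paper's actual model): under a factorial-HMM-style observation model, the audio and video observations are conditionally independent given the active speaker, so per-frame speaker likelihoods from the two modalities can be fused by multiplication. The speaker names, likelihood values, and the frame-wise MAP decoding below are purely illustrative assumptions.

```python
# Hedged toy sketch of multimodal fusion for diarization: given per-frame
# speaker likelihoods from each modality, the joint likelihood factorizes
# as p(audio, video | speaker) = p(audio | speaker) * p(video | speaker).
# All numbers are made up for illustration; the paper's model additionally
# learns parameters with EM and models temporal dynamics via a DBN.

def fuse_and_diarize(audio_lik, video_lik):
    """audio_lik, video_lik: lists of per-frame dicts speaker -> likelihood.
    Returns the per-frame MAP speaker under the fused observation model."""
    labels = []
    for a, v in zip(audio_lik, video_lik):
        # Multiply modality likelihoods (conditional independence given speaker).
        fused = {s: a[s] * v[s] for s in a}
        labels.append(max(fused, key=fused.get))
    return labels

audio = [{"A": 0.7, "B": 0.3}, {"A": 0.4, "B": 0.6}]
video = [{"A": 0.6, "B": 0.4}, {"A": 0.2, "B": 0.8}]
print(fuse_and_diarize(audio, video))  # → ['A', 'B']
```

Note how the second frame flips to speaker B: audio alone is only mildly in B's favor, but the video likelihood reinforces it, which is the kind of cross-modal disambiguation the fused model exploits.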

Original language: English
Article number: 5728824
Pages (from-to): 79-93
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 34
Issue number: 1
DOIs
Publication status: Published - 2012
Externally published: Yes

Keywords

  • Audiovisual fusion
  • Dynamic Bayesian networks
  • Speaker diarization
