
TalTech-IRIT-LIS Speaker and Language Diarization Systems for DISPLACE 2024 (2407.12743v1)

Published 17 Jul 2024 in eess.AS

Abstract: This paper describes the submissions of team TalTech-IRIT-LIS to the DISPLACE 2024 challenge. Our team participated in the speaker diarization and language diarization tracks of the challenge. In the speaker diarization track, our best submission was an ensemble of systems based on the pyannote.audio speaker diarization pipeline utilizing powerset training and our recently proposed PixIT method that performs joint diarization and speech separation. We improve upon PixIT by using the separation outputs for speaker embedding extraction. Our ensemble achieved a diarization error rate of 27.1% on the evaluation dataset. In the language diarization track, we fine-tuned a pre-trained Wav2Vec2-BERT language embedding model on in-domain data, and clustered short segments using AHC and VBx, based on similarity scores from LDA/PLDA. This led to a language diarization error rate of 27.6% on the evaluation data. Both results were ranked first in their respective challenge tracks.
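The language diarization track pipeline clusters short-segment language embeddings with agglomerative hierarchical clustering (AHC). As a rough illustration of that clustering step only, the sketch below runs AHC over cosine distances between toy embeddings; the synthetic vectors, dimensions, and the 0.5 distance threshold are all assumptions for demonstration, not values from the paper (which scores segments with LDA/PLDA and refines with VBx).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy stand-ins for per-segment language embeddings (in the paper these
# come from a fine-tuned Wav2Vec2-BERT model; here: two well-separated
# synthetic directions plus small noise, purely for illustration).
rng = np.random.default_rng(0)
base_a = np.zeros(16); base_a[0] = 1.0   # "language A" direction
base_b = np.zeros(16); base_b[1] = 1.0   # "language B" direction
embeddings = np.vstack([
    base_a + 0.05 * rng.normal(size=(5, 16)),
    base_b + 0.05 * rng.normal(size=(5, 16)),
])

# AHC over pairwise cosine distances with average linkage.
dists = pdist(embeddings, metric="cosine")
tree = linkage(dists, method="average")

# Cut the dendrogram at an (assumed) distance threshold of 0.5
# to obtain one cluster label per segment.
labels = fcluster(tree, t=0.5, criterion="distance")
print(labels)
```

With clearly separated embeddings, the first five segments land in one cluster and the last five in another; in practice the threshold (or PLDA calibration) governs how aggressively segments are merged.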

Authors (6)
  1. Joonas Kalda (3 papers)
  2. Tanel Alumäe (14 papers)
  3. Martin Lebourdais (4 papers)
  4. Hervé Bredin (18 papers)
  5. Séverin Baroudi (1 paper)
  6. Ricard Marxer (21 papers)
Citations (2)