Multi-scale speaker embedding-based graph attention networks for speaker diarisation (2110.03361v1)

Published 7 Oct 2021 in eess.AS and cs.AI

Abstract: The objective of this work is effective speaker diarisation using multi-scale speaker embeddings. Typically, there is a trade-off between the ability to recognise short speaker segments and the discriminative power of the embedding, depending on the segment length used for embedding extraction. To this end, recent works have proposed the use of multi-scale embeddings, where segments with varying lengths are used. However, the scores are combined using a weighted summation scheme in which the weights are fixed after the training phase, whereas the importance of each segment length can differ within a single session. To address this issue, we present three key contributions in this paper: (1) we propose graph attention networks for multi-scale speaker diarisation; (2) we design scale indicators to utilise the scale information of each embedding; (3) we adapt attention-based aggregation to utilise a pre-computed affinity matrix from multi-scale embeddings. We demonstrate the effectiveness of our method on various datasets, where speaker confusion, the primary metric, drops by more than 10% relative on average compared to the baseline.
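The fusion idea in the abstract, replacing scale weights that are fixed after training with weights that can vary within a session, can be illustrated with a short sketch. The PyTorch code below is not the authors' implementation: the names `AttentiveScaleFusion` and `add_scale_indicator` are hypothetical, and the simple per-node attention stands in for the paper's full graph attention network over the multi-scale affinity matrix.

```python
import torch
import torch.nn.functional as F

def fixed_weight_fusion(affinities, weights):
    """Baseline: combine per-scale affinity matrices with weights
    that are fixed after training (same mixture for every session)."""
    # affinities: list of (N, N) tensors, one per scale
    # weights:    (S,) tensor of fixed scale weights
    return sum(w * A for w, A in zip(weights, affinities))

def add_scale_indicator(embeddings, scale_idx, num_scales):
    """Append a one-hot scale indicator to each embedding so a
    downstream attention module can tell scales apart (one
    hypothetical realisation of the paper's 'scale indicators')."""
    n = embeddings.size(0)
    one_hot = F.one_hot(torch.full((n,), scale_idx), num_scales).float()
    return torch.cat([embeddings, one_hot], dim=-1)

class AttentiveScaleFusion(torch.nn.Module):
    """Toy attention-based fusion: per-node scale weights are
    predicted from scale-tagged embeddings, so the effective
    importance of each scale can vary within a session."""
    def __init__(self, dim, num_scales):
        super().__init__()
        self.score = torch.nn.Linear(dim + num_scales, 1)

    def forward(self, embeddings_per_scale, affinities):
        # embeddings_per_scale: list of (N, D) tensors, one per scale
        # affinities:           list of (N, N) pre-computed affinity matrices
        num_scales = len(affinities)
        tagged = [add_scale_indicator(e, s, num_scales)
                  for s, e in enumerate(embeddings_per_scale)]
        # (N, S) attention logits -> softmax over scales, per node
        logits = torch.stack(
            [self.score(t).squeeze(-1) for t in tagged], dim=-1)
        alpha = torch.softmax(logits, dim=-1)  # (N, S)
        # weight row i of each affinity matrix by node i's attention
        fused = sum(alpha[:, s:s + 1] * affinities[s]
                    for s in range(num_scales))
        return 0.5 * (fused + fused.T)  # symmetrise before clustering
```

In the fixed-weight baseline, every session uses the same scale mixture; the attentive variant lets each segment (node) emphasise the scale whose embedding is most reliable for it, which is the behaviour the abstract motivates.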

Authors (6)
  1. Youngki Kwon (13 papers)
  2. Hee-Soo Heo (30 papers)
  3. Jee-weon Jung (69 papers)
  4. You Jin Kim (14 papers)
  5. Bong-Jin Lee (23 papers)
  6. Joon Son Chung (106 papers)
Citations (17)
