
Universal speaker recognition encoders for different speech segments duration (2210.16231v1)

Published 28 Oct 2022 in cs.SD, cs.LG, and eess.AS

Abstract: Creating universal speaker encoders that are robust to different acoustic conditions and speech durations is a major challenge today. According to our observations, systems trained on short speech segments are optimal for short-phrase speaker verification, while systems trained on long segments are superior for long-segment verification. A system trained simultaneously on pooled short and long speech segments does not give optimal verification results and usually degrades on both short and long segments. This paper addresses the problem of creating universal speaker encoders for different speech segment durations. We describe our simple recipe for training a universal speaker encoder for any selected neural network architecture. According to our evaluation of wav2vec-TDNN based systems on the NIST SRE and VoxCeleb1 benchmarks, the proposed universal encoder improves speaker verification across different enrollment and test speech segment durations. The key feature of the proposed encoder is that it has the same inference time as the selected neural network architecture.
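The abstract does not spell out the training recipe, but the core idea of exposing one encoder to both short and long segments can be illustrated with a minimal, hypothetical sketch. The class name, the duration grid, and the PyTorch Dataset framing below are assumptions for illustration, not the paper's published method.

```python
# Hypothetical sketch (not the paper's published recipe): draw crops of
# varying length from each utterance so a single encoder sees the full
# range of enrollment/test durations during training.

import random
import torch
from torch.utils.data import Dataset

class MultiDurationCrops(Dataset):
    """Returns random crops whose length is sampled from a set of target
    durations (seconds); the duration grid here is an assumed example."""

    def __init__(self, utterances, sample_rate=16000, durations=(2.0, 5.0, 15.0)):
        self.utterances = utterances      # list of (1-D waveform tensor, speaker id)
        self.sample_rate = sample_rate
        self.durations = durations

    def __len__(self):
        return len(self.utterances)

    def __getitem__(self, idx):
        wav, label = self.utterances[idx]
        target_len = int(random.choice(self.durations) * self.sample_rate)
        if wav.numel() <= target_len:
            # pad utterances shorter than the target duration with zeros
            wav = torch.nn.functional.pad(wav, (0, target_len - wav.numel()))
        else:
            # otherwise take a random crop of the target duration
            start = random.randint(0, wav.numel() - target_len)
            wav = wav[start:start + target_len]
        return wav, label
```

Because the duration variation happens only in the data pipeline, the encoder itself is unchanged, which is consistent with the abstract's claim that inference time matches the selected architecture.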

Authors (3)
  1. Sergey Novoselov (13 papers)
  2. Vladimir Volokhov (5 papers)
  3. Galina Lavrentyeva (12 papers)
Citations (2)
