On deep speaker embeddings for text-independent speaker recognition (1804.10080v1)

Published 26 Apr 2018 in cs.SD, cs.CL, eess.AS, and stat.ML

Abstract: We investigate deep neural network performance in the text-independent speaker recognition task. We demonstrate that using angular softmax activation at the last classification layer of a classification neural network, instead of a simple softmax activation, allows training a more generalized discriminative speaker embedding extractor. Cosine similarity is an effective metric for speaker verification in this embedding space. We also address the problem of choosing an architecture for the extractor. We found that deep networks with residual frame-level connections outperform wide but relatively shallow architectures. This paper also proposes several improvements for previous DNN-based extractor systems to increase speaker recognition accuracy. We show that the discriminatively trained similarity metric learning approach outperforms the standard LDA-PLDA method as an embedding backend. The results obtained on the Speakers in the Wild and NIST SRE 2016 evaluation sets demonstrate the robustness of the proposed systems when dealing with close-to-real-life conditions.
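The abstract notes that cosine similarity is an effective verification metric in the learned embedding space. A minimal sketch of that scoring step is shown below; the embedding vectors and acceptance threshold are hypothetical placeholders, since real embeddings would come from the paper's trained extractor network.

```python
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# Hypothetical embeddings; in practice these are extractor outputs.
enroll = np.array([0.20, 0.90, -0.40])
test_same = np.array([0.25, 0.85, -0.35])   # same speaker, similar direction
test_diff = np.array([-0.80, 0.10, 0.60])   # different speaker

# Verification decision against an illustrative threshold.
THRESHOLD = 0.5
accept = cosine_score(enroll, test_same) > THRESHOLD
```

In a full system the threshold would be calibrated on a development set, or the raw cosine score would be fed to a trained backend such as the metric-learning approach the paper compares against LDA-PLDA.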

Authors (5)
  1. Sergey Novoselov (13 papers)
  2. Andrey Shulipa (5 papers)
  3. Ivan Kremnev (2 papers)
  4. Alexandr Kozlov (4 papers)
  5. Vadim Shchemelinin (3 papers)
Citations (60)
