The Microsoft System for VoxCeleb Speaker Recognition Challenge 2022 (2209.11266v1)

Published 22 Sep 2022 in cs.SD

Abstract: In this report, we describe our submitted system for track 2 of the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22). We fuse a variety of well-performing models, ranging from supervised models to self-supervised learning (SSL) pre-trained models. The supervised models, trained using the VoxCeleb-2 dev data, consist of ECAPA-TDNN and very deep Res2Net architectures. The SSL pre-trained models, wav2vec and WavLM, are trained on large-scale unlabeled speech data of up to a million hours. These models are cascaded with ECAPA-TDNN and further fine-tuned in a supervised fashion to extract the speaker representations. Score normalization and calibration are applied to all 13 models, which are then fused into the submitted system. We also explore audio quality measures such as duration, SNR, T60, and MOS in the calibration stage. The best submitted system achieves 0.073 in minDCF and 1.436% in EER on the VoxSRC-22 evaluation set.
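
To make the score-processing pipeline in the abstract concrete, here is a minimal sketch of adaptive score normalization followed by a calibration/fusion step that can consume quality measures (duration, SNR, T60, MOS) as side information. This is not the authors' code: the top-k cohort size, the use of scikit-learn's logistic regression as the calibrator, and all function and variable names are illustrative assumptions.

```python
# Hypothetical sketch of AS-norm + calibration/fusion as described in the abstract.
# Not the authors' implementation; all names and hyperparameters are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression


def adaptive_s_norm(scores, enroll_cohort, test_cohort, top_k=200):
    """Adaptive symmetric score normalization (AS-norm).

    scores:        (n_trials,) raw similarity scores for the trials
    enroll_cohort: (n_trials, n_cohort) scores of each enrollment utterance
                   against a cohort set
    test_cohort:   (n_trials, n_cohort) scores of each test utterance
                   against the same cohort
    """
    def mean_std_topk(cohort):
        # Use only the top-k most similar cohort scores for each trial side.
        top = np.sort(cohort, axis=1)[:, -top_k:]
        return top.mean(axis=1), top.std(axis=1) + 1e-8

    mu_e, sd_e = mean_std_topk(enroll_cohort)
    mu_t, sd_t = mean_std_topk(test_cohort)
    return 0.5 * ((scores - mu_e) / sd_e + (scores - mu_t) / sd_t)


def calibrate_and_fuse(norm_scores, quality, labels):
    """Fuse normalized scores from several systems with a linear calibrator.

    norm_scores: (n_trials, n_systems) AS-normalized scores, one column per model
    quality:     (n_trials, n_quality) side information such as duration,
                 SNR, T60 and MOS estimates for each trial
    labels:      (n_trials,) 1 = target, 0 = non-target (development trials)
    """
    features = np.hstack([norm_scores, quality])
    calibrator = LogisticRegression(max_iter=1000)
    calibrator.fit(features, labels)
    # decision_function returns calibrated log-odds-like fused scores.
    return calibrator, calibrator.decision_function(features)


if __name__ == "__main__":
    # Toy data only, to show the expected shapes of the pipeline.
    rng = np.random.default_rng(0)
    n_trials, n_systems, n_cohort = 1000, 13, 400
    labels = rng.integers(0, 2, n_trials)
    raw = rng.normal(labels[:, None] * 0.8, 1.0, (n_trials, n_systems))
    normed = np.column_stack([
        adaptive_s_norm(raw[:, i],
                        rng.normal(0, 1, (n_trials, n_cohort)),
                        rng.normal(0, 1, (n_trials, n_cohort)))
        for i in range(n_systems)
    ])
    quality = rng.normal(0, 1, (n_trials, 4))  # stand-ins for duration/SNR/T60/MOS
    calibrator, fused = calibrate_and_fuse(normed, quality, labels)
    print("fused score range:", fused.min(), fused.max())
```

In practice, the calibrator would be trained on a held-out development trial list and then applied to the evaluation trials; the sketch fits and scores on the same toy data purely for brevity.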

Authors (7)
  1. Gang Liu (177 papers)
  2. Tianyan Zhou (11 papers)
  3. Yong Zhao (194 papers)
  4. Yu Wu (196 papers)
  5. Zhuo Chen (319 papers)
  6. Yao Qian (37 papers)
  7. Jian Wu (314 papers)
Citations (1)
