
Improving Noise Robustness In Speaker Identification Using A Two-Stage Attention Model (1909.11200v2)

Published 24 Sep 2019 in eess.AS, cs.AI, cs.CL, and cs.SD

Abstract: While the use of deep neural networks has significantly boosted speaker recognition performance, it is still challenging to separate speakers in poor acoustic environments. To improve the robustness of speaker recognition systems in noise, a novel two-stage attention mechanism is proposed that can be used in existing architectures such as Time Delay Neural Networks (TDNNs) and Convolutional Neural Networks (CNNs). Noise is known to often mask important information in both the time and frequency domains. The proposed mechanism allows the models to concentrate on reliable time/frequency components of the signal. The proposed approach is evaluated on the VoxCeleb1 dataset, which targets speaker recognition in real-world conditions. In addition, three types of noise at different signal-to-noise ratios (SNRs) were added for this work. The proposed mechanism is compared with three strong baselines: X-vectors, Attentive X-vector, and ResNet-34. Results on both identification and verification tasks show that the two-stage attention mechanism consistently improves upon these baselines under all noise conditions.
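The abstract describes attention applied in two stages, over frequency and then over time, so that unreliable time/frequency regions are down-weighted before pooling. The paper's exact architecture is not reproduced here; the sketch below illustrates the general two-stage idea only, using simple energy-based attention scores (the paper learns its scores with network parameters), and the function names are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def two_stage_attention(spec):
    """Toy two-stage attention over a spectrogram-like feature map
    (T frames x F frequency bins, as nested lists).
    Stage 1 weights frequency bins; stage 2 weights time frames,
    then pools to a single F-dimensional utterance vector.
    Scores here are raw energy sums -- an illustrative stand-in
    for the learned attention of the paper."""
    T, F = len(spec), len(spec[0])
    # Stage 1: frequency attention -- one weight per frequency bin
    freq_scores = [sum(spec[t][f] for t in range(T)) for f in range(F)]
    freq_w = softmax(freq_scores)
    weighted = [[spec[t][f] * freq_w[f] for f in range(F)] for t in range(T)]
    # Stage 2: time attention -- one weight per frame, then weighted pooling
    time_scores = [sum(frame) for frame in weighted]
    time_w = softmax(time_scores)
    pooled = [sum(time_w[t] * weighted[t][f] for t in range(T)) for f in range(F)]
    return pooled
```

In a real system the pooled vector would feed a speaker-embedding layer; noisy bins and frames receive low weights and so contribute little to the embedding.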

Authors (3)
  1. Yanpei Shi (12 papers)
  2. Qiang Huang (50 papers)
  3. Thomas Hain (58 papers)
Citations (1)
