
Training Noisy Single-Channel Speech Separation With Noisy Oracle Sources: A Large Gap and A Small Step (2010.12430v2)

Published 23 Oct 2020 in eess.AS and cs.SD

Abstract: As the performance of single-channel speech separation systems has improved, there has been a desire to move to more challenging conditions than the clean, near-field speech that initial systems were developed on. When training deep learning separation models, a need for ground truth leads to training on synthetic mixtures. As such, training in noisy conditions requires either using noise synthetically added to clean speech, preventing the use of in-domain data for a noisy-condition task, or training using mixtures of noisy speech, requiring the network to additionally separate the noise. We demonstrate the relative inseparability of noise and that this noisy speech paradigm leads to significant degradation of system performance. We also propose an SI-SDR-inspired training objective that tries to exploit the inseparability of noise to implicitly partition the signal and discount noise separation errors, enabling the training of better separation systems with noisy oracle sources.
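The proposed training objective builds on SI-SDR (scale-invariant signal-to-distortion ratio). As a point of reference, here is a minimal sketch of standard SI-SDR in NumPy; note this is the base metric only, not the paper's noise-discounting variant, and the function name and `eps` guard are illustrative choices, not from the paper.

```python
import numpy as np

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR in dB: project the estimate onto the
    reference, then compare target energy to residual energy."""
    alpha = np.dot(est, ref) / (np.dot(ref, ref) + eps)  # optimal scaling
    target = alpha * ref          # scaled reference (signal component)
    noise = est - target          # everything not explained by ref
    return 10 * np.log10((np.dot(target, target) + eps)
                         / (np.dot(noise, noise) + eps))
```

Because of the projection step, rescaling the estimate leaves the score unchanged, which is what makes the metric scale-invariant; a cleaner estimate yields a higher (less negative) dB value.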

Authors (4)
  1. Matthew Maciejewski (9 papers)
  2. Jing Shi (123 papers)
  3. Shinji Watanabe (416 papers)
  4. Sanjeev Khudanpur (74 papers)
Citations (10)
