Progressive Learning for Stabilizing Label Selection in Speech Separation with Mapping-based Method (2110.10593v2)

Published 20 Oct 2021 in cs.SD, cs.LG, and eess.AS

Abstract: Speech separation has been studied in the time domain because of its lower latency and higher performance compared with the time-frequency domain. The masking-based method has been used mostly in the time domain, while the other common method (mapping-based) has been inadequately studied. We investigate the use of the mapping-based method in the time domain and show that it can outperform the masking-based method on a large training set. We also investigate the frequent label-switching problem in permutation invariant training (PIT), which results in suboptimal training because the labels selected by PIT differ across training epochs. Our experimental results showed that PIT works well for a shallow separation model, but label switching occurs for deeper models. We inferred that layer decoupling may be the reason for the frequent label switching. Therefore, we propose a training strategy based on progressive learning. This approach significantly reduces inconsistent label assignment without adding computational complexity or requiring additional training data. By combining this training strategy with the mapping-based method, we significantly improve separation performance compared to the baseline.
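The label-switching problem described above arises because PIT re-selects the output-to-speaker assignment at every step by minimizing the loss over all permutations; when the selected permutation changes between epochs, the network receives inconsistent targets. The sketch below is a standard utterance-level PIT loss, not the paper's code: the SI-SNR objective, the two-speaker setup, and all names are assumptions made for illustration.

```python
import itertools
import torch

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR between estimated and reference sources of shape (batch, time)."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    proj = (est * ref).sum(dim=-1, keepdim=True) * ref / (ref.pow(2).sum(dim=-1, keepdim=True) + eps)
    noise = est - proj
    return 10 * torch.log10(proj.pow(2).sum(dim=-1) / (noise.pow(2).sum(dim=-1) + eps) + eps)

def pit_loss(estimates, references):
    """
    Utterance-level permutation invariant training loss.
    estimates, references: (batch, n_src, time)
    Returns the mean negative SI-SNR under the best label assignment and the
    permutation index chosen for each utterance.
    """
    n_src = estimates.size(1)
    perms = list(itertools.permutations(range(n_src)))
    losses = []
    for perm in perms:
        # Negative SI-SNR averaged over sources for this particular assignment.
        loss = -torch.stack(
            [si_snr(estimates[:, i], references[:, p]) for i, p in enumerate(perm)],
            dim=1,
        ).mean(dim=1)
        losses.append(loss)
    losses = torch.stack(losses, dim=1)       # (batch, n_perms)
    min_loss, best_perm = losses.min(dim=1)   # best permutation per utterance
    return min_loss.mean(), best_perm
```

Tracking `best_perm` across epochs is one way to measure the label-switching behavior the paper analyzes: a stable model keeps the same index for a given utterance, while frequent changes indicate the inconsistent label assignment the proposed progressive-learning strategy aims to reduce.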

Authors (3)
  1. Chenyang Gao (6 papers)
  2. Yue Gu (24 papers)
  3. Ivan Marsic (17 papers)
