Interrupted and cascaded permutation invariant training for speech separation (1910.12706v1)

Published 28 Oct 2019 in cs.SD, cs.LG, and eess.AS

Abstract: Permutation Invariant Training (PIT) has long been a stepping-stone method for training speech separation models to handle the label ambiguity problem. Because PIT selects the minimum-cost label assignment dynamically, few studies have treated separation as a joint optimization over both the model parameters and the label assignments; most have focused instead on searching for good model architectures and parameters. In this paper, we investigate, for a given model architecture, various flexible label assignment strategies for training the model rather than directly using PIT. Surprisingly, we discover that a significant performance boost over PIT is possible if the model is trained with fixed label assignments, provided a good set of labels is chosen. With fixed-label training cascaded between two sections of PIT, we achieve state-of-the-art performance on WSJ0-2mix without changing the model architecture at all.
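A minimal sketch of the PIT idea the abstract contrasts against: the loss for each training mixture is computed under every assignment of model outputs to reference sources, and the minimum-cost assignment is used. This is an illustration of the generic two-speaker MSE case, not the paper's exact training recipe; the tensor shapes and MSE cost are assumptions.

```python
from itertools import permutations
import torch

def pit_loss(estimates: torch.Tensor, references: torch.Tensor) -> torch.Tensor:
    """Permutation invariant MSE loss.

    estimates, references: tensors of shape (batch, n_src, n_samples).
    """
    n_src = estimates.shape[1]
    per_perm_losses = []
    for perm in permutations(range(n_src)):
        # Cost under this particular output-to-label assignment.
        permuted = references[:, list(perm), :]
        per_perm_losses.append(((estimates - permuted) ** 2).mean(dim=(1, 2)))
    # PIT dynamically keeps the minimum-cost assignment per utterance;
    # the fixed-label training studied in the paper would instead hold one
    # chosen assignment constant throughout a training stage.
    return torch.stack(per_perm_losses, dim=1).min(dim=1).values.mean()
```

In the cascaded scheme described above, a stage of training with such dynamically selected assignments is followed by a stage using fixed assignments, then a final PIT stage.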

Authors (5)
  1. Gene-Ping Yang (7 papers)
  2. Szu-Lin Wu (2 papers)
  3. Yao-Wen Mao (1 paper)
  4. Hung-yi Lee (327 papers)
  5. Lin-Shan Lee (42 papers)
Citations (13)