Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks (1703.06284v2)

Published 18 Mar 2017 in cs.SD, cs.LG, and eess.AS

Abstract: In this paper we propose the utterance-level Permutation Invariant Training (uPIT) technique. uPIT is a practically applicable, end-to-end, deep learning based solution for speaker independent multi-talker speech separation. Specifically, uPIT extends the recently proposed Permutation Invariant Training (PIT) technique with an utterance-level cost function, hence eliminating the need for solving an additional permutation problem during inference, which is otherwise required by frame-level PIT. We achieve this using Recurrent Neural Networks (RNNs) that, during training, minimize the utterance-level separation error, hence forcing separated frames belonging to the same speaker to be aligned to the same output stream. In practice, this allows RNNs, trained with uPIT, to separate multi-talker mixed speech without any prior knowledge of signal duration, number of speakers, speaker identity or gender. We evaluated uPIT on the WSJ0 and Danish two- and three-talker mixed-speech separation tasks and found that uPIT outperforms techniques based on Non-negative Matrix Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network (DANet). Furthermore, we found that models trained with uPIT generalize well to unseen speakers and languages. Finally, we found that a single model, trained with uPIT, can handle both two-speaker, and three-speaker speech mixtures.

Authors (4)
  1. Morten Kolbæk (8 papers)
  2. Dong Yu (329 papers)
  3. Zheng-Hua Tan (85 papers)
  4. Jesper Jensen (41 papers)
Citations (699)

Summary

  • The paper introduces an utterance-level permutation invariant training (uPIT) technique that resolves speaker tracing and label permutation issues in multi-talker speech separation.
  • It employs deep recurrent neural networks with LSTM architectures to optimize signal reconstruction in the time-frequency domain, achieving up to 10 dB SDR improvement.
  • The approach demonstrates flexibility by handling both two- and three-talker mixtures with a single model, outperforming traditional methods in challenging scenarios.

Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks

The paper presents a novel approach to solving the challenging problem of speaker-independent multi-talker speech separation using a technique named utterance-level Permutation Invariant Training (uPIT). This work builds upon and extends the previously introduced Permutation Invariant Training (PIT) by incorporating an utterance-level training criterion, effectively addressing both the label permutation problem and the speaker tracing problem. The proposed method utilizes deep Recurrent Neural Networks (RNNs), specifically Long Short-Term Memory (LSTM) networks, to achieve robust separation of mixed speech signals from multiple speakers.

Background and Motivation

The cocktail party problem, a term coined to describe the human ability to focus auditory attention on a single speaker amidst a mixture of noises and competing speakers, has long challenged automatic speech separation systems. Traditional approaches, such as those based on Computational Auditory Scene Analysis (CASA) and Non-negative Matrix Factorization (NMF), have shown limited success, particularly in speaker-independent scenarios. Moreover, probabilistic models, while effective under closed-set speaker conditions, fail when speaker identities are unknown. Recent deep learning advances have made significant strides on this problem, yet challenges such as the label permutation problem remain.

Contribution

The main contribution of this work is a fully end-to-end deep learning solution for speaker-independent multi-talker speech separation using utterance-level PIT. Unlike its predecessor, frame-level PIT, which requires solving an additional permutation problem during inference and is prone to speaker tracing errors, uPIT eliminates these issues by optimizing the permutation at the utterance level. This yields a consistent assignment of separated outputs across the entire utterance, simplifying inference and improving separation quality.

Methodology

The separation task aims at recovering individual source signals from a linearly mixed single-microphone signal. The proposed approach carries out separation in the Time-Frequency (T-F) domain using the Short-Time Fourier Transform (STFT). Three types of ideal masks are considered for target creation: Ideal Ratio Mask (IRM), Ideal Amplitude Mask (IAM), and Ideal Phase Sensitive Mask (IPSM).
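For concreteness, these masks can be written directly in terms of the source and mixture STFTs. The NumPy sketch below uses one common convention for each definition (magnitude-domain IRM, phase-sensitive mask weighted by the cosine of the phase difference); the variable names are illustrative and not the paper's notation.

```python
import numpy as np

def ideal_masks(source_stfts, mixture_stft, eps=1e-8):
    """Compute IRM, IAM, and a phase-sensitive mask for each source.

    source_stfts: complex array of shape (num_speakers, freq, time)
    mixture_stft: complex array of shape (freq, time)
    """
    src_mag = np.abs(source_stfts)   # |S_s(t, f)|
    mix_mag = np.abs(mixture_stft)   # |Y(t, f)|

    # IRM: source magnitude relative to the sum of all source magnitudes
    irm = src_mag / (src_mag.sum(axis=0, keepdims=True) + eps)

    # IAM: source magnitude relative to the mixture magnitude
    iam = src_mag / (mix_mag + eps)

    # Phase-sensitive mask: IAM weighted by cos(source phase - mixture phase)
    phase_diff = np.angle(source_stfts) - np.angle(mixture_stft)
    ipsm = src_mag * np.cos(phase_diff) / (mix_mag + eps)

    return irm, iam, ipsm
```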

Deep Recurrent Neural Networks (RNNs): The models leverage LSTM and bi-directional LSTM (BLSTM) RNNs due to their ability to model long-range temporal dependencies. The training criterion minimizes the Mean Squared Error (MSE) between the estimated and true magnitude spectra of the source signals.
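A minimal mask-estimation network along these lines could look as follows in PyTorch; the layer sizes, the sigmoid mask activation, and the interface are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BLSTMSeparator(nn.Module):
    """Minimal BLSTM mask-estimation network (illustrative hyperparameters)."""

    def __init__(self, num_bins=129, num_speakers=2, hidden=384, num_layers=3):
        super().__init__()
        self.num_bins = num_bins
        self.num_speakers = num_speakers
        self.blstm = nn.LSTM(num_bins, hidden, num_layers=num_layers,
                             batch_first=True, bidirectional=True)
        self.mask_layer = nn.Linear(2 * hidden, num_bins * num_speakers)

    def forward(self, mixture_mag):
        # mixture_mag: (batch, time, freq) magnitude spectrogram of the mixture
        h, _ = self.blstm(mixture_mag)
        masks = torch.sigmoid(self.mask_layer(h))
        masks = masks.view(mixture_mag.size(0), mixture_mag.size(1),
                           self.num_speakers, self.num_bins)
        # Estimated source magnitudes: one mask per speaker applied to the mixture
        return masks * mixture_mag.unsqueeze(2)
```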

Optimization: During training, uPIT optimizes the permutation at the utterance level rather than at each frame or segment, ensuring a stable output order. This significantly reduces errors associated with speaker tracing and yields a more straightforward and efficient separation process during inference.
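The heart of uPIT is this loss: the pairwise MSE is accumulated over the entire utterance for every possible assignment of output streams to speakers, and the assignment with the lowest utterance-level error is used. The following PyTorch sketch illustrates the idea; the tensor shapes and names are our assumptions, consistent with the model sketch above.

```python
from itertools import permutations

import torch

def upit_mse_loss(estimates, targets):
    """Utterance-level PIT loss (sketch).

    estimates, targets: (batch, time, num_speakers, freq) magnitude spectra.
    For each utterance the MSE is accumulated over all frames for every
    possible speaker permutation, and the smallest utterance-level error is kept.
    """
    num_spk = estimates.size(2)
    perm_losses = []
    for perm in permutations(range(num_spk)):
        # Utterance-level MSE under this output-to-speaker assignment: (batch,)
        err = torch.stack([
            torch.mean((estimates[:, :, out] - targets[:, :, spk]) ** 2, dim=(1, 2))
            for out, spk in enumerate(perm)
        ]).mean(dim=0)
        perm_losses.append(err)
    # Pick the best permutation per utterance, then average over the batch
    return torch.stack(perm_losses, dim=1).min(dim=1).values.mean()
```

Because the minimum is taken once per utterance, the chosen permutation is fixed for all frames, which is precisely what removes the need for a separate speaker-tracing step at inference time.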

Experimental Results

The models were rigorously evaluated on the WSJ0-2mix, WSJ0-3mix, and Danish-2mix datasets, demonstrating their efficacy in both closed-condition (CC) and open-condition (OC) scenarios. The results show that uPIT outperforms traditional methods like CASA and NMF, and competes favorably with recent deep learning techniques such as Deep Clustering (DPCL) and Deep Attractor Network (DANet).

Performance Metrics: Evaluations were conducted using the Signal-to-Distortion Ratio (SDR) and Perceptual Evaluation of Speech Quality (PESQ). The models achieved significant improvements in SDR, with uPIT-trained models yielding up to 10 dB SDR improvement for two-talker mixtures and demonstrating strong generalization to unseen speakers and languages.
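SDR is conventionally computed with the BSS Eval toolkit, and improvements are reported relative to the unprocessed mixture. As a rough illustration of how such numbers can be obtained, the sketch below uses the mir_eval Python package; this choice of toolkit is our assumption, not something specified in the paper.

```python
import numpy as np
from mir_eval.separation import bss_eval_sources

def sdr_improvement(references, estimates, mixture):
    """Mean SDR improvement of separated signals over the unprocessed mixture.

    references, estimates: (num_speakers, num_samples) time-domain signals.
    mixture: (num_samples,) mixed signal.
    """
    sdr_sep, _, _, _ = bss_eval_sources(references, estimates)
    # Baseline SDR: use the mixture itself as the estimate for every speaker
    sdr_mix, _, _, _ = bss_eval_sources(
        references, np.tile(mixture, (references.shape[0], 1)))
    return float(np.mean(sdr_sep - sdr_mix))
```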

Flexibility of uPIT: One notable finding is that a single uPIT model can handle varying numbers of speakers without a priori knowledge, proving the approach's flexibility and practical applicability. This was evidenced by a combined two- and three-talker separation task where a single model effectively separated speech from both conditions.

Implications

From a theoretical perspective, uPIT addresses key limitations of previous methods by eliminating the need for frame-level permutation adjustments and providing a robust solution for speaker tracing. Practically, this leads to more reliable and efficient speech separation systems that can be used in a variety of real-world applications such as automatic meeting transcription, multi-party human-machine interaction, and advanced hearing aids.

Future Directions

Future research may explore integrating uPIT with complex-domain separation techniques and multi-channel approaches to further enhance performance. Additionally, leveraging more sophisticated recurrent dropout strategies and curriculum training could yield further improvements. The potential for a universal model that generalizes across multiple speakers, languages, and noise conditions remains an exciting avenue for exploration.

In conclusion, the proposed utterance-level Permutation Invariant Training technique represents a significant advancement in the field of multi-talker speech separation, offering a robust, efficient, and scalable solution to a long-standing challenge.