Conditional Latent Diffusion-Based Speech Enhancement Via Dual Context Learning
(2501.10052v1)
Published 17 Jan 2025 in cs.SD and eess.AS
Abstract: Recently, the application of diffusion probabilistic models has advanced speech enhancement through generative approaches. However, existing diffusion-based methods have focused on the generation process in high-dimensional waveform or spectral domains, leading to increased generation complexity and slower inference speeds. Additionally, these methods have primarily modelled clean speech distributions, with limited exploration of noise distributions, thereby constraining the discriminative capability of diffusion models for speech enhancement. To address these issues, we propose a novel approach that integrates a conditional latent diffusion model (cLDM) with dual-context learning (DCL). Our method utilizes a variational autoencoder (VAE) to compress mel-spectrograms into a low-dimensional latent space. We then apply cLDM to transform the latent representations of both clean speech and background noise into Gaussian noise by the DCL process, and a parameterized model is trained to reverse this process, conditioned on noisy latent representations and text embeddings. By operating in a lower-dimensional space, the latent representations reduce the complexity of the generation process, while the DCL process enhances the model's ability to handle diverse and unseen noise environments. Our experiments demonstrate the strong performance of the proposed approach compared to existing diffusion-based methods, even with fewer iterative steps, and highlight the superior generalization capability of our models to out-of-domain noise datasets (https://github.com/modelscope/ClearerVoice-Studio).
Summary
The paper's main contribution is cLDM+DCL, a speech enhancement approach that combines a conditional latent diffusion model with dual-context learning.
It reduces computational complexity by operating in a low-dimensional latent space via a variational autoencoder, and leverages text embeddings to distinguish the speech-generation and noise-generation tasks.
Experiments demonstrate superior performance over baselines, particularly for unseen noise environments, validated by metrics like PESQ, ESTOI, SI-SDR, WV-MOS, and DNSMOS.
The paper introduces a novel speech enhancement method using a conditional latent diffusion model (cLDM) with dual-context learning (DCL). The approach addresses limitations in existing diffusion-based methods, such as high computational complexity and insufficient modeling of noise distributions. The cLDM operates in a lower-dimensional latent space, achieved through a variational autoencoder (VAE), to reduce the complexity of the generation process. The DCL scheme enhances the model's ability to handle diverse and unseen noise environments by modeling both clean speech and background noise distributions.
The paper details the system architecture, which comprises a VAE, a cLDM, and a vocoder. The VAE encoder projects mel-spectrograms of noisy speech y, clean speech x, and noise n into low-dimensional latent representations $z_Y$, $z_X$, and $z_N$, respectively. The cLDM then learns the distributions of $z_X$ and $z_N$, guided by a text embedding $\tau$. For inference, the cLDM generates the speech prior $z_X$ conditioned on $z_Y$ and $\tau$; the result is decoded by the VAE decoder and converted back to the waveform domain by the vocoder.
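As a rough illustration, the sketch below chains the three inference stages in PyTorch. Every module name and the `cldm.sample` interface are hypothetical placeholders, not the paper's actual code (see the linked repository for that).

```python
import torch

@torch.no_grad()
def enhance(noisy_mel, vae_encoder, cldm, vae_decoder, vocoder, text_embedding):
    """Illustrative cLDM inference pipeline: mel -> latent -> diffusion -> mel -> waveform."""
    z_y = vae_encoder(noisy_mel)                      # noisy latent z_Y
    z_x = cldm.sample(cond=z_y, text=text_embedding)  # generated speech prior z_X
    enhanced_mel = vae_decoder(z_x)                   # back to the mel-spectrogram domain
    return vocoder(enhanced_mel)                      # BigVGAN-style mel-to-waveform synthesis
```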
The cLDM employs forward and reverse processes to approximate the conditional data distribution $q(z_0 \mid z_Y, \tau)$ with the learned model distribution $p_\theta(z_0 \mid z_Y, \tau)$. The forward process gradually transforms the data distribution into a standard Gaussian distribution through a Markov chain, with transition probability defined as:

$$q(z_t \mid z_{t-1}) = \mathcal{N}\big(z_t;\ \sqrt{1-\beta_t}\, z_{t-1},\ \beta_t I\big)$$

$$q(z_t \mid z_0) = \mathcal{N}\big(z_t;\ \sqrt{\bar{\alpha}_t}\, z_0,\ (1-\bar{\alpha}_t) I\big), \quad \text{i.e.} \quad z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$$

where:
$z_t$ is the latent variable at time step $t$
$\beta_t$ is the noise schedule
$I$ is the identity matrix
$\bar{\alpha}_t := \prod_{i=1}^{t}(1-\beta_i)$ is the cumulative noise level at step $t$
$\epsilon \sim \mathcal{N}(0, I)$ is the injected Gaussian noise
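A minimal sketch of this closed-form noising step, assuming a simple linear $\beta_t$ schedule (the paper's actual schedule is not reproduced here):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # assumed noise schedule beta_t
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative noise level alpha_bar_t

def q_sample(z0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample z_t ~ q(z_t | z_0) in closed form via the reparameterization."""
    eps = torch.randn_like(z0)                  # injected Gaussian noise epsilon
    a_bar = alpha_bars[t]
    return a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps
```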
The reverse process refines the speech prior through successive iterations $z_{T-1}, \ldots, z_0$ based on learned conditional transition distributions:

$$p_\theta(z_{t-1} \mid z_t, z_Y, \tau) = \mathcal{N}\big(z_{t-1};\ \mu_\theta(z_t, t, z_Y, \tau),\ \sigma_t^2 I\big)$$
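One reverse step under the common $\epsilon$-prediction parameterization could be sketched as follows; the conditioning interface `model(z_t, t, z_y, tau)` and the choice $\sigma_t^2 = \beta_t$ are assumptions, not details from the paper:

```python
import torch

@torch.no_grad()
def p_sample(model, z_t, t, z_y, tau, betas, alpha_bars):
    """One reverse step z_t -> z_{t-1}, conditioned on noisy latent z_y and text embedding tau."""
    beta_t = betas[t]
    a_bar_t = alpha_bars[t]
    eps_hat = model(z_t, t, z_y, tau)            # predicted injected noise (assumed interface)
    mean = (z_t - beta_t / (1.0 - a_bar_t).sqrt() * eps_hat) / (1.0 - beta_t).sqrt()
    if t == 0:
        return mean                              # final step returns the mean directly
    return mean + beta_t.sqrt() * torch.randn_like(z_t)  # sigma_t^2 = beta_t variant
```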
The DCL scheme trains a shared cLDM to generate both the speech prior $z_X$ and the noise prior $z_N$ using generated noisy-clean data $D_{y,x}$ and noisy-noise data $D_{y,n}$. The text prompts "Speech enhancement" and "Background noise estimation" guide the generation process for speech and noise, respectively, and are converted into embeddings using a pre-trained T5 model, as sketched below.
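A sketch of producing those prompt embeddings with Hugging Face transformers; the specific checkpoint `t5-base` is an assumption, since the summary does not name the T5 variant used:

```python
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-base")   # assumed checkpoint
encoder = T5EncoderModel.from_pretrained("t5-base")

prompts = ["Speech enhancement", "Background noise estimation"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
tau = encoder(**inputs).last_hidden_state            # text embeddings tau, one row per task
```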
The VAE model consists of an encoder and decoder built with stacked convolutional modules and is retrained on clean speech, noisy speech, and background noise data. BigVGAN is employed as the vocoder to generate speech samples from the enhanced mel-spectrogram and is retrained using only clean speech.
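The summary gives no layer-level details, so the following is only a toy stacked-convolution VAE encoder in the spirit described above; all channel sizes and strides are illustrative:

```python
import torch
import torch.nn as nn

class MelVAEEncoder(nn.Module):
    """Toy stacked-conv encoder compressing a mel-spectrogram into a latent z."""
    def __init__(self, latent_channels: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.SiLU(),
        )
        self.to_mu = nn.Conv2d(64, latent_channels, kernel_size=1)
        self.to_logvar = nn.Conv2d(64, latent_channels, kernel_size=1)

    def forward(self, mel: torch.Tensor):
        h = self.conv(mel)                          # (B, 64, F/4, T/4)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterization trick
        return z, mu, logvar
```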
Experiments were conducted using the LibriSpeech corpus for clean speech and the AudioSet corpus for noise data. The training set comprised 360 hours of speech and 250 hours of noise. Five noise types (laughing, gunshot, singing, car engine, and rain) were reserved as unseen noises for testing. Noisy-clean pairs and noisy-noise pairs were generated with varying Signal-to-Noise Ratio (SNR) levels. Performance was evaluated using Perceptual Evaluation of Speech Quality (PESQ), extended short-term objective intelligibility (ESTOI), scale-invariant signal-to-distortion ratio (SI-SDR), Wav2Vec MOS (WV-MOS), and Deep Noise Suppression MOS (DNSMOS). The proposed method, cLDM+DCL, was compared against several baselines, including CDiffuSE, SGMSE+, StoRM, NASE, Conv-TasNet, and MetricGAN+.
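Generating such pairs reduces to mixing speech and noise at a target SNR; a minimal sketch, assuming 1-D float arrays at a common sample rate (not the paper's data pipeline):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale noise so the speech/noise power ratio matches the target SNR, then mix."""
    noise = np.resize(noise, speech.shape)          # crude loop/truncate length alignment
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12           # guard against silent noise clips
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```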
Ablation studies examined the impact of the number of reverse process steps T and the DCL scheme. Results showed that cLDM+DCL benefits from an increased number of reverse diffusion steps while maintaining a low real-time factor (RTF). Performance comparisons on seen-noise and unseen-noise test sets demonstrated that cLDM+DCL outperforms other diffusion-based methods, particularly on unseen noises, highlighting the effectiveness of learning noise distributions.
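RTF here means wall-clock processing time divided by audio duration (lower is faster); a minimal way one might measure it:

```python
import time

def real_time_factor(enhance_fn, waveform, sample_rate: int) -> float:
    """RTF = processing time / audio duration; values below 1.0 are faster than real time."""
    start = time.perf_counter()
    enhance_fn(waveform)
    elapsed = time.perf_counter() - start
    return elapsed / (len(waveform) / sample_rate)
```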
The paper concludes that the cLDM+DCL framework improves speech enhancement by operating in a low-dimensional latent space and effectively handling diverse noise environments through the DCL scheme. Experimental results validate the effectiveness of the approach on both seen and unseen noise conditions, as well as on out-of-domain datasets such as VoiceBank+DEMAND and DNS Challenge 2020.