
Single Channel Speech Enhancement Using Temporal Convolutional Recurrent Neural Networks (2002.00319v1)

Published 2 Feb 2020 in cs.SD, cs.LG, and eess.AS

Abstract: In recent decades, neural network based methods have significantly improved the performance of speech enhancement. Most of them estimate the time-frequency (T-F) representation of the target speech directly or indirectly, then resynthesize the waveform from the estimated T-F representation. In this work, we propose the temporal convolutional recurrent network (TCRN), an end-to-end model that directly maps a noisy waveform to a clean waveform. The TCRN, which combines convolutional and recurrent neural networks, is able to efficiently and effectively leverage both short-term and long-term information. Furthermore, we present an architecture that repeatedly downsamples and upsamples the speech signal during forward propagation, and show that it improves performance compared with existing convolutional recurrent networks. We also present several key techniques to stabilize the training process. The experimental results show that our model consistently outperforms existing speech enhancement approaches in terms of speech intelligibility and quality.

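The abstract only describes the design at a high level. As an illustration, the PyTorch sketch below shows one way a waveform-to-waveform enhancer combining strided convolutional downsampling, a recurrent bottleneck, and transposed-convolution upsampling could be structured; the layer sizes, kernel settings, and class name are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class TCRNSketch(nn.Module):
    """Illustrative TCRN-style enhancer (assumed architecture, not the authors' exact model):
    strided 1-D convolutions downsample the noisy waveform, an LSTM captures long-term
    context at the reduced rate, and transposed convolutions upsample back to samples."""

    def __init__(self, channels=64, hidden=128):
        super().__init__()
        # Encoder: two strided conv blocks, each halving the time resolution.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=8, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=8, stride=2, padding=3),
            nn.ReLU(),
        )
        # Recurrent bottleneck over the downsampled sequence (long-term information).
        self.rnn = nn.LSTM(channels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, channels)
        # Decoder: transposed convolutions restore the original sample rate.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=8, stride=2, padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=8, stride=2, padding=3),
        )

    def forward(self, noisy):             # noisy: (batch, 1, samples)
        z = self.encoder(noisy)           # (batch, channels, samples / 4)
        z, _ = self.rnn(z.transpose(1, 2))
        z = self.proj(z).transpose(1, 2)  # back to (batch, channels, samples / 4)
        return self.decoder(z)            # (batch, 1, samples)

# Usage: denoise a batch of one-second 16 kHz clips.
model = TCRNSketch()
enhanced = model(torch.randn(2, 1, 16000))
print(enhanced.shape)  # torch.Size([2, 1, 16000])
```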
Authors (4)
  1. Jingdong Li (4 papers)
  2. Hui Zhang (405 papers)
  3. Xueliang Zhang (39 papers)
  4. Changliang Li (11 papers)
Citations (8)
