
Diffiner: A Versatile Diffusion-based Generative Refiner for Speech Enhancement (2210.17287v3)

Published 27 Oct 2022 in eess.AS, cs.LG, and cs.SD

Abstract: Although deep neural network (DNN)-based speech enhancement (SE) methods outperform earlier non-DNN-based ones, they often degrade the perceptual quality of their outputs. To tackle this problem, we introduce a DNN-based generative refiner, Diffiner, which improves the perceptual quality of speech that has already been processed by an SE method. We train a diffusion-based generative model on a dataset consisting of clean speech only. Our refiner then mixes clean parts, newly generated via denoising diffusion restoration, into the degraded and distorted parts produced by the preceding SE method, yielding refined speech. Once trained on clean speech, the refiner can be applied to various SE methods without additional training specialized for each SE module. It can therefore serve as a versatile post-processing module w.r.t. SE methods and has high potential in terms of modularity. Experimental results show that our method improves perceptual speech quality regardless of the preceding SE method used.
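The abstract describes a two-stage pipeline: an arbitrary SE front-end first produces an enhanced (but possibly distorted) signal, and a diffusion model trained only on clean speech then refines it via iterative denoising. The sketch below illustrates that decoupling with a toy annealed-Langevin refinement loop; `se_frontend` and `score_model` are hypothetical placeholders (a real system would use a trained SE network and a learned score/diffusion network, typically in the STFT domain), so this is a minimal illustration of the modular structure, not the paper's actual algorithm.

```python
import numpy as np

def se_frontend(noisy):
    # Placeholder SE front-end (hypothetical): crude attenuation stands in
    # for any preceding enhancement method; the refiner is agnostic to it.
    return noisy * 0.8

def score_model(x, sigma):
    # Placeholder for a score network trained on clean speech only.
    # Here: the analytic score of a zero-mean Gaussian prior (toy stand-in).
    return -x / (sigma**2 + 1.0)

def diffusion_refine(se_output, n_steps=10, sigma0=1.0, seed=0):
    """Toy annealed-Langevin refinement of an SE output (illustrative only).

    Starts from the SE output and takes small score-guided steps with
    decreasing noise scale, mimicking a reverse diffusion process.
    """
    x = se_output.astype(float).copy()
    rng = np.random.default_rng(seed)
    for t in range(n_steps):
        sigma = sigma0 * (1.0 - t / n_steps)          # annealed noise level
        eps = 0.01 * sigma**2                          # step size
        noise = rng.standard_normal(x.shape)
        x = x + eps * score_model(x, sigma) + np.sqrt(2.0 * eps) * sigma * noise
    return x

# Modularity in action: the same refiner wraps any SE front-end unchanged.
noisy = np.random.default_rng(1).standard_normal(256)
refined = diffusion_refine(se_frontend(noisy))
```

Because the refiner is trained without reference to any particular SE module, swapping `se_frontend` for a different enhancement method requires no retraining, which is the modularity claim the abstract makes.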

Authors (7)
  1. Ryosuke Sawata (8 papers)
  2. Naoki Murata (29 papers)
  3. Yuhta Takida (32 papers)
  4. Toshimitsu Uesaka (17 papers)
  5. Takashi Shibuya (32 papers)
  6. Shusuke Takahashi (31 papers)
  7. Yuki Mitsufuji (127 papers)
Citations (14)
