
Framewise WaveGAN: High Speed Adversarial Vocoder in Time Domain with Very Low Computational Complexity (2212.04532v2)

Published 8 Dec 2022 in eess.AS, cs.LG, cs.SD, and eess.SP

Abstract: GAN vocoders are currently one of the state-of-the-art methods for building high-quality neural waveform generative models. However, most of their architectures require dozens of billions of floating-point operations per second (GFLOPS) to generate speech waveforms in a samplewise manner, which makes GAN vocoders still challenging to run on normal CPUs without accelerators or parallel computers. In this work, we propose a new architecture for GAN vocoders that mainly depends on recurrent and fully-connected networks to directly generate the time-domain signal in a framewise manner. This results in a considerable reduction of the computational cost and enables very fast generation on both GPUs and low-complexity CPUs. Experimental results show that our Framewise WaveGAN vocoder achieves significantly higher quality than auto-regressive maximum-likelihood vocoders such as LPCNet at a very low complexity of 1.2 GFLOPS. This makes GAN vocoders more practical on edge and low-power devices.
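To illustrate the framewise idea described in the abstract, the sketch below shows a generator that runs a recurrent network once per frame of acoustic features and maps each hidden state through fully-connected layers to an entire frame of time-domain samples, instead of producing one sample per network step. This is not the authors' implementation; the layer sizes, frame length, and feature dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FramewiseGenerator(nn.Module):
    """Minimal framewise vocoder generator sketch (hypothetical sizes)."""
    def __init__(self, feature_dim=80, hidden_dim=256, frame_len=160):
        super().__init__()
        # One GRU step per frame of conditioning features, not per sample.
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        # Fully-connected layers emit a whole frame of samples at once.
        self.fc = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, frame_len),
            nn.Tanh(),
        )

    def forward(self, features):
        # features: (batch, n_frames, feature_dim)
        h, _ = self.rnn(features)        # (batch, n_frames, hidden_dim)
        frames = self.fc(h)              # (batch, n_frames, frame_len)
        return frames.reshape(features.size(0), -1)  # concatenated waveform

# Example: 100 frames of 80-dim features -> 100 * 160 = 16000 samples
wav = FramewiseGenerator()(torch.randn(1, 100, 80))
print(wav.shape)  # torch.Size([1, 16000])
```

Because the recurrent and fully-connected layers execute once per frame rather than once per sample, the per-sample cost drops by roughly the frame length, which is the mechanism behind the low GFLOPS figure reported for Framewise WaveGAN (the exact architecture and training with adversarial losses are described in the paper).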

Authors (5)
  1. Ahmed Mustafa (14 papers)
  2. Jean-Marc Valin (55 papers)
  3. Jan Büthe (13 papers)
  4. Paris Smaragdis (60 papers)
  5. Mike Goodwin (3 papers)
Citations (4)
