
LAFMA: A Latent Flow Matching Model for Text-to-Audio Generation (2406.08203v1)

Published 12 Jun 2024 in eess.AS and cs.SD

Abstract: Recently, diffusion models have driven significant progress in speech and audio generation. Nevertheless, the quality of the samples they produce still leaves room for improvement, and their effectiveness depends on a large number of sampling steps, leading to long synthesis times for high-quality audio. Previous Text-to-Audio (TTA) methods have mostly applied diffusion models in a latent space for audio generation. In this paper, we explore integrating a Flow Matching (FM) model into the audio latent space. FM is an alternative simulation-free method that trains continuous normalizing flows (CNFs) by regressing vector fields. We demonstrate that our model significantly enhances the quality of generated audio samples, outperforming prior models, while reducing the number of inference steps to ten with almost no loss in performance.
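As a rough illustration of the flow-matching idea mentioned in the abstract, the sketch below shows one common conditional flow-matching objective (a linear/rectified interpolation path) together with a fixed-step Euler sampler in PyTorch. The network `v_theta`, its call signature, and the specific probability path are illustrative assumptions, not details taken from the paper.

```python
import torch

def flow_matching_loss(v_theta, x1, cond):
    """Common conditional flow-matching objective (linear path, an assumption).

    v_theta : network predicting a vector field, called as v_theta(x_t, t, cond)
    x1      : batch of clean latents, shape (B, ...)
    cond    : conditioning (e.g., text-encoder embeddings)
    """
    b = x1.shape[0]
    t = torch.rand(b, device=x1.device)                      # t ~ U(0, 1)
    t_ = t.view(b, *([1] * (x1.dim() - 1)))                  # broadcast over feature dims
    x0 = torch.randn_like(x1)                                 # noise sample
    xt = (1.0 - t_) * x0 + t_ * x1                            # point on the straight path
    target = x1 - x0                                          # target vector field
    return torch.mean((v_theta(xt, t, cond) - target) ** 2)   # regress the field

@torch.no_grad()
def sample(v_theta, shape, cond, steps=10, device="cpu"):
    """Euler integration of the learned ODE from noise (t=0) toward data (t=1)."""
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * v_theta(x, t, cond)
    return x
```

The ten-step Euler loop mirrors the abstract's claim of roughly ten inference steps; the paper may use a different ODE solver or probability path.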

Authors (9)
  1. Wenhao Guan (13 papers)
  2. Kaidi Wang (19 papers)
  3. Wangjin Zhou (6 papers)
  4. Yang Wang (672 papers)
  5. Feng Deng (5 papers)
  6. Hui Wang (371 papers)
  7. Lin Li (329 papers)
  8. Qingyang Hong (29 papers)
  9. Yong Qin (35 papers)
Citations (1)
