Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization (2404.09956v4)

Published 15 Apr 2024 in cs.SD, cs.AI, cs.CL, and eess.AS

Abstract: Generative multimodal content is increasingly prevalent in the content creation arena, as it has the potential to allow artists and media personnel to create pre-production mockups by quickly bringing their ideas to life. The generation of audio from text prompts is an important aspect of such processes in the music and film industry. Many recent diffusion-based text-to-audio models focus on training increasingly sophisticated diffusion models on large datasets of prompt-audio pairs. These models do not explicitly focus on the presence of concepts or events, or their temporal ordering, in the output audio with respect to the input prompt. Our hypothesis is that focusing on these aspects of audio generation could improve performance in the presence of limited data. In this work, using the existing text-to-audio model Tango, we synthetically create a preference dataset where each prompt has a winner audio output and some loser audio outputs for the diffusion model to learn from. The loser outputs, in theory, have some concepts from the prompt missing or in an incorrect order. We fine-tune the publicly available Tango text-to-audio model using a diffusion-DPO (direct preference optimization) loss on our preference dataset and show that this leads to improved audio output over both Tango and AudioLDM2, in terms of both automatic and manual evaluation metrics.

Enhancing Diffusion-based Text-to-Audio Generative Models through Direct Preference Optimization-based Alignment

Introduction

In the evolving field of generative AI, particularly within the multimedia content creation domain, the capability to convert text prompts into high-fidelity audio has immense applications, from aiding in pre-production mockups to supporting diverse creative endeavors. One core challenge in this area is the development of models that can accurately translate textual descriptions into coherent and contextually appropriate audio outputs. The paper by Majumder et al. addresses this challenge by presenting an innovative approach that leverages Direct Preference Optimization (DPO) for refining the performance of text-to-audio generative models, specifically through the enhancement of the Tango text-to-audio model.

Background and Related Work

Recent developments in text-to-audio generation have demonstrated promising results with models like AudioLDM and Tango, which leverage diffusion architectures to generate audio from text. Notably, AudioLM has pushed the boundaries further by integrating semantic tokens derived from audio into the generation process. However, these models often struggle to ensure the presence and correct temporal ordering of described concepts or events in the generated audio, especially in data-constrained training environments. To address this, the paper extends Direct Preference Optimization (DPO), a method that has recently proven successful for aligning LLM outputs with human preferences in language-model training, to the domain of audio generation.
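At its core, DPO removes the explicit reward model and trains directly on pairwise preferences, using log-probabilities from the policy and a frozen reference model. A minimal sketch of the standard (LLM-style) DPO loss for a single preference pair follows; the function name, inputs, and β value are illustrative assumptions for exposition, not the paper's code:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one (winner, loser) pair.

    logp_*: sequence log-probs under the policy being trained;
    ref_logp_*: log-probs under the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the winner
    # over the loser, relative to the reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin), written as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))
```

A larger preference margin yields a smaller loss, so gradient descent pushes the policy to assign relatively more probability to winner outputs than the reference does.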

Preference Dataset Generation

A pivotal contribution of this work is the creation of a novel preference dataset, termed Audio-alpaca, designed specifically for text-to-audio generation. The dataset comprises textual prompts paired with audio samples categorized into preferred (winner) and less desirable (loser) outputs. These pairings are systematically generated through strategies that perturb text prompts and apply adversarial filtering to produce audio variations. Through manual and automatic thresholding on model scores (e.g., CLAP scores) for these audio samples, the paper ensures the dataset effectively reflects preferences indicative of better alignment with human expectations.
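The score-thresholding step above can be sketched as a simple selection routine: keep a prompt only if its best-scoring audio is a good match, and pair it only with candidates that trail the winner by a clear margin. Everything here is a hypothetical illustration; the scores stand in for CLAP similarity, and the threshold values are assumptions, not the paper's actual procedure:

```python
def build_preference_pairs(prompt, candidates, min_winner=0.45, min_margin=0.05):
    """Select a (winner, losers) pairing for one prompt.

    candidates: list of (audio_id, score) tuples, where score is a
    text-audio similarity (a stand-in for CLAP here).
    Returns (winner_id, loser_ids) or None if no usable pairing exists.
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    winner_id, winner_score = ranked[0]
    if winner_score < min_winner:
        return None  # even the best audio matches the prompt poorly; discard
    # Losers must trail the winner by a clear margin to give a useful signal.
    losers = [cid for cid, s in ranked[1:] if winner_score - s >= min_margin]
    if not losers:
        return None
    return winner_id, losers

pairs = build_preference_pairs("a dog barks then a car passes",
                               [("a1", 0.62), ("a2", 0.40), ("a3", 0.55)])
```

The margin requirement filters out near-ties, where labeling one output the "loser" would inject noise rather than preference signal.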

Model Development and Evaluation

The augmented Tango model, dubbed Tango 2, undergoes fine-tuning with the DPO-diffusion loss on the Audio-alpaca dataset. Empirical evaluations showcase Tango 2's superiority over its predecessor and another leading model, AudioLDM2, across a spectrum of automatic and manual assessment metrics. Notably, Tango 2 outperforms both baselines on CLAP score, which directly measures the semantic correspondence between the audio output and the input prompt. This outcome underlines the effectiveness of the preference-based optimization approach in enhancing the model's ability to generate contextually and semantically aligned audio content.
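For diffusion models, the DPO objective cannot use sequence log-probabilities directly; in the diffusion-DPO formulation, the policy/reference log-ratios are instead approximated by differences in per-step denoising errors. A numerically stable sketch of that per-pair loss, assuming the squared denoising errors at a sampled timestep have already been computed (names and β here are illustrative):

```python
import math

def dpo_diffusion_loss(err_w, err_w_ref, err_l, err_l_ref, beta=2000.0):
    """DPO-diffusion loss for one (winner, loser) audio pair.

    err_w, err_l: squared denoising errors ||eps - eps_theta||^2 of the
    model being fine-tuned on the winner/loser latents;
    err_w_ref, err_l_ref: the same errors under the frozen reference model.
    """
    # Negative when the fine-tuned model denoises the winner better (and/or
    # the loser worse) than the reference model does.
    inside = (err_w - err_w_ref) - (err_l - err_l_ref)
    z = beta * inside
    # -log sigmoid(-z) == log(1 + exp(z)), computed stably for large |z|.
    if z > 0:
        return z + math.log1p(math.exp(-z))
    return math.log1p(math.exp(z))
```

Minimizing this loss therefore rewards denoising winner audios more accurately than the reference model, relative to loser audios, which is how the preference signal reaches the diffusion weights.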

Implications and Future Directions

The paper by Majumder et al. not only presents a significant leap in the text-to-audio generation space by introducing a preference-optimized model but also contributes a rich dataset that could serve as a foundation for future research. The success of the DPO-driven fine-tuning methodology in this domain opens up new avenues for exploring preference-aligned generative models across various content formats. Looking ahead, this approach may inspire further innovations in multimodal content generation, potentially leading to more intuitive and human-aligned AI tools for creative expression.

In conclusion, the incorporation of Direct Preference Optimization into the text-to-audio generation process represents a notable advance in the field. By more closely aligning generated audio with the semantic and contextual nuances of textual prompts, models like Tango 2 hold the promise of significantly enhancing the quality and utility of AI-generated audio content across a range of applications.

References (32)
  1. Improving Image Generation with Better Captions. https://api.semanticscholar.org/CorpusID:264403242
  2. AudioLM: A Language Modeling Approach to Audio Generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing (2023).
  3. Scaling Instruction-Finetuned Language Models. https://doi.org/10.48550/ARXIV.2210.11416
  4. w2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training. In IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2021). IEEE, 244-250. https://doi.org/10.1109/ASRU51503.2021.9688253
  5. Text-to-Audio Generation Using Instruction-Tuned LLM and Latent Diffusion Model. arXiv preprint arXiv:2304.13731 (2023).
  6. ImageBind: One Embedding Space To Bind Them All. In CVPR 2023.
  7. Efficient Diffusion Training via Min-SNR Weighting Strategy. arXiv:2303.09556 [cs.CV]
  8. Make-An-Audio: Text-to-Audio Generation with Prompt-Enhanced Diffusion Models. arXiv preprint arXiv:2301.12661 (2023).
  9. Image-to-Image Translation with Conditional Adversarial Networks. In CVPR 2017, 5967-5976.
  10. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. CoRR abs/1312.6114 (2013).
  11. HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis. Advances in Neural Information Processing Systems 33 (2020), 17022-17033.
  12. Decoupling Magnitude and Phase Estimation with Deep ResUNet for Music Source Separation. In International Society for Music Information Retrieval Conference (ISMIR).
  13. AudioGen: Textually Guided Audio Generation. arXiv preprint arXiv:2209.15352 (2022).
  14. Towards General Text Embeddings with Multi-Stage Contrastive Learning. arXiv preprint arXiv:2308.03281 (2023).
  15. BATON: Aligning Text-to-Audio Model with Human Preference Feedback. arXiv:2402.00744 [cs.SD]
  16. AudioLDM: Text-to-Audio Generation with Latent Diffusion Models. arXiv preprint arXiv:2301.12503 (2023).
  17. AudioLDM 2: Learning Holistic Audio Generation with Self-Supervised Pretraining. arXiv preprint arXiv:2308.05734 (2023).
  18. Ilya Loshchilov and Frank Hutter. Decoupled Weight Decay Regularization. arXiv preprint arXiv:1711.05101 (2017).
  19. Mustango: Toward Controllable Text-to-Music Generation. arXiv:2311.08355 [eess.AS]
  20. OpenAI. DALL·E 2. https://openai.com/dall-e-2
  21. OpenAI. GPT-4. https://openai.com/gpt-4
  22. OpenAI. Introducing ChatGPT. https://openai.com/blog/chatgpt
  23. Language Models are Unsupervised Multitask Learners. (2019).
  24. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. arXiv:2305.18290 [cs.LG]
  25. High-Resolution Image Synthesis with Latent Diffusion Models. In CVPR 2022, 10684-10695.
  26. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI 2015. Springer International Publishing, Cham, 234-241.
  27. Denoising Diffusion Implicit Models. ArXiv abs/2010.02502 (2020).
  28. Learning from Between-Class Examples for Deep Sound Recognition. arXiv:1711.10282 (2017). http://arxiv.org/abs/1711.10282
  29. Audiobox: Unified Audio Generation with Natural Language Prompts. arXiv:2312.15821 [cs.SD]
  30. Diffusion Model Alignment Using Direct Preference Optimization. arXiv:2311.12908 [cs.CV]
  31. Large-Scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation. In ICASSP 2023. IEEE, 1-5.
  32. SoundStream: An End-to-End Neural Audio Codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing 30 (2022), 495-507. https://doi.org/10.1109/TASLP.2021.3129994
Authors (6)
  1. Navonil Majumder (48 papers)
  2. Chia-Yu Hung (5 papers)
  3. Deepanway Ghosal (33 papers)
  4. Wei-Ning Hsu (76 papers)
  5. Rada Mihalcea (131 papers)
  6. Soujanya Poria (138 papers)
Citations (29)