Enhancing Audio Generation Diversity with Visual Information (2403.01278v1)
Abstract: Audio and sound generation has garnered significant attention in recent years, with a primary focus on improving the quality of the generated audio. However, there has been limited research on enhancing diversity, particularly for audio generation within specific categories: current models tend to produce homogeneous samples within a category. This work addresses that limitation by using visual information to improve the diversity of generated audio. We propose a clustering-based method that leverages visual information to guide the model toward distinct audio content within each category. Results on seven categories indicate that the extra visual input substantially enhances audio generation diversity. Audio samples are available at https://zeyuxie29.github.io/DiverseAudioGeneration.
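The abstract only sketches the clustering-based idea, so the snippet below illustrates one plausible reading of it: visual embeddings for clips of a sound category are grouped into clusters, and the cluster index serves as an extra conditioning signal for the audio generator, so that one category can map to several distinct sub-styles. This is a minimal sketch under stated assumptions, not the authors' implementation; the choice of k-means, the number of clusters, and the use of precomputed visual embeddings (e.g., from a CLIP-style image encoder) are all assumptions made for illustration.

```python
# Minimal sketch: cluster per-clip visual embeddings into sub-category groups.
# The resulting cluster index can be passed to the generator as an additional
# condition alongside the category label. Names and parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans


def cluster_visual_features(features: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Group per-clip visual embeddings into sub-category clusters.

    features: (num_clips, embed_dim) array of visual embeddings
              (assumed to be precomputed, e.g., with a CLIP image encoder).
    Returns one cluster index per clip.
    """
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return kmeans.fit_predict(features)


if __name__ == "__main__":
    # Random placeholder embeddings standing in for real visual features.
    rng = np.random.default_rng(0)
    dummy_features = rng.normal(size=(128, 512)).astype(np.float32)
    labels = cluster_visual_features(dummy_features, n_clusters=4)
    print("cluster sizes:", np.bincount(labels))
```

In this reading, training examples within a category are relabeled with their cluster index, and sampling a different index at inference time should yield audibly different content within the same category.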