Unsupervised Improvement of Audio-Text Cross-Modal Representations (2305.01864v3)

Published 3 May 2023 in cs.SD, cs.LG, and eess.AS

Abstract: Recent advances in using LLMs to obtain cross-modal audio-text representations have overcome the limitations of conventional training approaches that use predefined labels. This has allowed the community to make progress in tasks like zero-shot classification, which would otherwise not be possible. However, learning such representations requires a large amount of human-annotated audio-text pairs. In this paper, we study unsupervised approaches to improve the learning framework of such representations with unpaired text and audio. We explore domain-unspecific and domain-specific curation methods to create audio-text pairs that we use to further improve the model. We also show that when domain-specific curation is used in conjunction with a soft-labeled contrastive loss, we obtain significant improvements in zero-shot classification performance on downstream sound event classification and acoustic scene classification tasks.
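The soft-labeled contrastive loss mentioned in the abstract can be illustrated with a minimal NumPy sketch. The idea, hedged as an illustration rather than the paper's actual implementation, is that standard contrastive (InfoNCE-style) training uses a one-hot target that treats only the paired caption as a match, whereas a soft target distribution lets partially related audio-text pairs contribute. The function name, temperature value, and the assumption that soft targets are supplied externally are all illustrative choices, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_contrastive_loss(audio_emb, text_emb, soft_targets, temperature=0.07):
    """Audio-to-text soft contrastive loss (illustrative sketch).

    audio_emb, text_emb: (B, D) embedding matrices for a batch of B pairs.
    soft_targets: (B, B) rows summing to 1; the identity matrix recovers
    the standard one-hot InfoNCE target.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (a @ t.T) / temperature            # (B, B) similarity matrix
    log_probs = np.log(softmax(logits, axis=1))
    # Soft cross-entropy: a weighted sum over all captions, not just the pair.
    return float(-(soft_targets * log_probs).sum(axis=1).mean())
```

With `soft_targets = np.eye(B)` this reduces to the familiar one-hot contrastive objective; replacing the identity with a smoothed similarity-derived distribution is what makes the loss "soft". In practice both the audio-to-text and text-to-audio directions would be averaged.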

Authors (7)
  1. Zhepei Wang (30 papers)
  2. Cem Subakan (35 papers)
  3. Krishna Subramani (9 papers)
  4. Junkai Wu (6 papers)
  5. Tiago Tavares (3 papers)
  6. Fabio Ayres (4 papers)
  7. Paris Smaragdis (60 papers)
Citations (3)