Conformer-Based Self-Supervised Learning for Non-Speech Audio Tasks (2110.07313v3)

Published 14 Oct 2021 in cs.SD, cs.LG, and eess.AS

Abstract: Representation learning from unlabeled data has been of major interest in artificial intelligence research. While self-supervised speech representation learning has been popular in the speech research community, very few works have comprehensively analyzed audio representation learning for non-speech audio tasks. In this paper, we propose a self-supervised audio representation learning method and apply it to a variety of downstream non-speech audio tasks. We combine the well-known wav2vec 2.0 framework, which has shown success in self-supervised learning for speech tasks, with parameter-efficient conformer architectures. Our self-supervised pre-training can reduce the need for labeled data by two-thirds. On the AudioSet benchmark, we achieve a mean average precision (mAP) score of 0.415, which is a new state-of-the-art on this dataset through audio-only self-supervised learning. Our fine-tuned conformers also surpass or match the performance of previous systems pre-trained in a supervised way on several downstream tasks. We further discuss the important design considerations for both pre-training and fine-tuning.
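For orientation, the sketch below shows a single conformer block in PyTorch, following the block layout of Gulati et al. (2020) that the paper's encoders build on: two half-step feed-forward modules sandwiching multi-head self-attention and a convolution module. This is a minimal illustrative sketch, not the paper's implementation; the model dimension, number of heads, kernel size, and dropout are placeholder assumptions.

```python
# Minimal conformer block sketch (illustrative; not the paper's code).
# Layout: half-step FFN -> self-attention -> conv module -> half-step FFN -> LayerNorm.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConformerBlock(nn.Module):
    def __init__(self, dim=256, heads=4, conv_kernel=31, ff_mult=4, dropout=0.1):
        super().__init__()
        self.ff1 = self._feed_forward(dim, ff_mult, dropout)
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, dropout=dropout, batch_first=True)
        # Convolution module: pointwise conv -> GLU -> depthwise conv -> BN -> SiLU -> pointwise conv
        self.conv_norm = nn.LayerNorm(dim)
        self.pointwise1 = nn.Conv1d(dim, 2 * dim, kernel_size=1)
        self.depthwise = nn.Conv1d(dim, dim, kernel_size=conv_kernel,
                                   padding=conv_kernel // 2, groups=dim)
        self.bn = nn.BatchNorm1d(dim)
        self.pointwise2 = nn.Conv1d(dim, dim, kernel_size=1)
        self.ff2 = self._feed_forward(dim, ff_mult, dropout)
        self.final_norm = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)

    @staticmethod
    def _feed_forward(dim, mult, dropout):
        # Pre-norm feed-forward module with SiLU (Swish) activation
        return nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim * mult), nn.SiLU(), nn.Dropout(dropout),
            nn.Linear(dim * mult, dim), nn.Dropout(dropout),
        )

    def forward(self, x):                        # x: (batch, time, dim)
        x = x + 0.5 * self.ff1(x)                # first half-step feed-forward
        a = self.attn_norm(x)
        a, _ = self.attn(a, a, a)
        x = x + self.dropout(a)                  # self-attention with residual
        c = self.conv_norm(x).transpose(1, 2)    # (batch, dim, time) for Conv1d
        c = F.glu(self.pointwise1(c), dim=1)
        c = self.pointwise2(F.silu(self.bn(self.depthwise(c))))
        x = x + self.dropout(c.transpose(1, 2))  # convolution module with residual
        x = x + 0.5 * self.ff2(x)                # second half-step feed-forward
        return self.final_norm(x)


# Quick shape check on random frame-level features
block = ConformerBlock()
feats = torch.randn(2, 100, 256)
print(block(feats).shape)  # torch.Size([2, 100, 256])
```

In the wav2vec 2.0-style setup the abstract describes, a stack of such blocks would serve as the context encoder trained with a contrastive objective over masked latent frames before fine-tuning on downstream non-speech audio tasks.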

Authors (7)
  1. Sangeeta Srivastava (7 papers)
  2. Yun Wang (229 papers)
  3. Andros Tjandra (39 papers)
  4. Anurag Kumar (118 papers)
  5. Chunxi Liu (20 papers)
  6. Kritika Singh (9 papers)
  7. Yatharth Saraf (21 papers)
Citations (22)