DECAR: Deep Clustering for learning general-purpose Audio Representations (2110.08895v4)

Published 17 Oct 2021 in cs.SD, cs.CL, and eess.AS

Abstract: We introduce DECAR, a self-supervised pre-training approach for learning general-purpose audio representations. Our system is based on clustering: it uses an offline clustering step to produce target labels that act as pseudo-labels for a prediction task. We build on recent advances in self-supervised learning for computer vision and design a lightweight, easy-to-use self-supervised pre-training scheme. We pre-train DECAR embeddings on a balanced subset of the large-scale Audioset dataset and transfer those representations to 9 downstream classification tasks, including speech, music, animal sounds, and acoustic scenes. Furthermore, we conduct ablation studies identifying key design choices and also make all our code and pre-trained models publicly available.
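The core loop the abstract describes — cluster embeddings offline, treat the cluster assignments as pseudo-labels, then train a predictor against them — can be sketched in a few lines. The sketch below is a toy numpy version on synthetic 2-D "embeddings", not the paper's actual pipeline (which operates on encoder features of AudioSet clips); the function names, the plain k-means, and the feature standardization (a stand-in for the whitening commonly used in deep-clustering pipelines) are all illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Plain k-means: the offline clustering step that yields pseudo-labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every embedding to its nearest centroid
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def pseudo_label_round(X, k=4, lr=0.5, steps=300, seed=0):
    """One round of clustering-based pre-training: cluster the embeddings
    offline, then fit a softmax head (with bias) to predict the assignments."""
    rng = np.random.default_rng(seed)
    # standardize features before clustering (illustrative stand-in for
    # the whitening step used in deep-clustering pipelines)
    X = (X - X.mean(0)) / (X.std(0) + 1e-8)
    labels = kmeans(X, k, seed=seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    W = rng.normal(scale=0.01, size=(Xb.shape[1], k))
    onehot = np.eye(k)[labels]
    for _ in range(steps):
        logits = Xb @ W
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        W -= lr * Xb.T @ (p - onehot) / len(Xb)    # cross-entropy gradient
    acc = (np.argmax(Xb @ W, axis=1) == labels).mean()
    return labels, acc
```

In the full method this round would alternate with updating the encoder, so that clustering and representation learning bootstrap each other; the sketch freezes the "encoder" (identity) and shows only the pseudo-label prediction task.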

Authors (4)
  1. Sreyan Ghosh (46 papers)
  2. Sandesh V Katta (2 papers)
  3. Ashish Seth (22 papers)
  4. S. Umesh (24 papers)
Citations (9)
