MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations (2305.17191v2)

Published 29 May 2023 in cs.LG, cs.SD, and eess.AS

Abstract: Contrastive self-supervised learning has gained attention for its ability to create high-quality representations from large unlabelled data sets. A key reason that these powerful features enable data-efficient learning of downstream tasks is that they provide augmentation invariance, which is often a useful inductive bias. However, the amount and type of invariances preferred is not known a priori, and varies across different downstream tasks. We therefore propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner. Our multi-task representation provides a strong and flexible feature that benefits diverse downstream tasks. We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance on all of them.
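To make the multi-task idea concrete, below is a minimal sketch of how a shared encoder can be trained with two objectives at once: a contrastive loss that encourages augmentation invariance, and an augmentation-classification loss that preserves augmentation-variant information. This is an illustrative assumption, not the paper's exact implementation: the encoder, head sizes, number of augmentations, and the loss pairing (SimCLR-style NT-Xent plus a transformation classifier) are placeholders, and the paper's parameter-efficient adapter sharing is omitted.

```python
# Hedged sketch of a multi-task self-supervised objective in the spirit of
# MT-SLVR: one shared encoder feeds two heads, one trained with a contrastive
# (augmentation-invariant) loss and one trained to predict which augmentation
# was applied (augmentation-variant). Architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSSL(nn.Module):
    def __init__(self, in_dim=1024, feat_dim=512, proj_dim=128, num_augs=8):
        super().__init__()
        # Placeholder encoder; an audio backbone would be used in practice.
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.inv_head = nn.Linear(feat_dim, proj_dim)  # contrastive projection
        self.var_head = nn.Linear(feat_dim, num_augs)  # augmentation classifier

    def forward(self, x):
        h = self.encoder(x)
        return self.inv_head(h), self.var_head(h)

def nt_xent(z1, z2, temp=0.1):
    """SimCLR-style contrastive loss over two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = (z @ z.t()) / temp
    n = z1.size(0)
    # Exclude self-similarity; each view's positive is its counterpart.
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                          float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One training step: x1, x2 are two augmented views of the same clips;
# aug_labels records which transformation produced x1.
model = MultiTaskSSL()
x1, x2 = torch.randn(32, 1024), torch.randn(32, 1024)
aug_labels = torch.randint(0, 8, (32,))
z1, logits1 = model(x1)
z2, _ = model(x2)
loss = nt_xent(z1, z2) + F.cross_entropy(logits1, aug_labels)
loss.backward()
```

Summing the two losses over a shared encoder is the simplest realization of the trade-off the abstract describes: the contrastive term pushes features toward augmentation invariance, while the classification term forces the representation to retain augmentation-variant information, leaving downstream tasks free to exploit whichever is useful.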

Authors (4)
  1. Calum Heggan
  2. Tim Hospedales
  3. Sam Budgett
  4. Mehrdad Yaghoobi
Citations (5)
