
Exploring Efficient-tuning Methods in Self-supervised Speech Models (2210.06175v3)

Published 10 Oct 2022 in eess.AS, cs.LG, and cs.SD

Abstract: In this study, we aim to explore efficient tuning methods for speech self-supervised learning. Recent studies show that self-supervised learning (SSL) can learn powerful representations for different speech tasks. However, fine-tuning pre-trained models for each downstream task is parameter-inefficient, since SSL models are notoriously large, with millions of parameters. Adapters are lightweight modules commonly used in NLP to solve this problem. In downstream tasks, the parameters of the SSL model are frozen and only the adapters are trained. Given the lack of studies generally exploring the effectiveness of adapters for self-supervised speech tasks, we intend to fill this gap by adding various adapter modules to pre-trained speech SSL models. We show that performance parity can be achieved with over 90% parameter reduction, and we discuss the pros and cons of efficient tuning techniques. This is the first comprehensive investigation of various adapter types across speech tasks.
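The adapter setup the abstract describes can be pictured with a short PyTorch sketch: a bottleneck adapter (down-projection, nonlinearity, up-projection, residual connection) is attached to each layer of a frozen SSL encoder, and only the adapter weights receive gradients. This is a minimal illustration of the general technique, not the paper's exact configuration; the hidden size, bottleneck width, and the `encoder.layers` attribute are assumptions made for the example.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    with a residual connection back to the input."""
    def __init__(self, dim: int = 768, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def attach_adapters(ssl_model: nn.Module, dim: int = 768) -> nn.ModuleList:
    """Freeze every pre-trained parameter and create one trainable adapter
    per encoder layer (assumes the model exposes `encoder.layers`)."""
    for p in ssl_model.parameters():
        p.requires_grad = False
    return nn.ModuleList(BottleneckAdapter(dim) for _ in ssl_model.encoder.layers)
```

Because only the adapters (and typically a small downstream head) are updated, the trainable parameter count drops to a few percent of the full model, which is the source of the 90%+ parameter reduction cited in the abstract.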

Authors (5)
  1. Zih-Ching Chen (11 papers)
  2. Chin-Lun Fu (3 papers)
  3. Chih-Ying Liu (2 papers)
  4. Shang-Wen Li (55 papers)
  5. Hung-yi Lee (325 papers)
Citations (37)