Evaluating Parameter-Efficient Transfer Learning Approaches on SURE Benchmark for Speech Understanding (2303.03267v1)
Abstract: Fine-tuning is widely used as the default algorithm for transfer learning from pre-trained models. Parameter inefficiency can, however, arise when all the parameters of a large pre-trained model need to be updated for each individual downstream task. As the number of parameters grows, fine-tuning is prone to overfitting and catastrophic forgetting. In addition, full fine-tuning can become prohibitively expensive when the model is used for many tasks. To mitigate this issue, parameter-efficient transfer learning algorithms, such as adapters and prefix tuning, have been proposed as a way to introduce a few trainable parameters that can be plugged into large pre-trained models such as BERT and HuBERT. In this paper, we introduce the Speech UndeRstanding Evaluation (SURE) benchmark for parameter-efficient learning for various speech-processing tasks. Additionally, we introduce a new adapter, ConvAdapter, based on 1D convolution. We show that ConvAdapter outperforms standard adapters while showing comparable performance against prefix tuning and LoRA with only 0.94% of trainable parameters on some of the tasks in SURE. We further explore the effectiveness of parameter-efficient transfer learning for speech synthesis tasks such as Text-to-Speech (TTS).
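The abstract describes ConvAdapter as a 1D-convolution-based adapter inserted into a frozen pre-trained backbone. Below is a minimal sketch of such a module, assuming the common bottleneck-adapter layout (down-projection, non-linearity, up-projection, residual connection) realized with `nn.Conv1d`; the kernel size, bottleneck width, activation, and insertion points are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class ConvAdapter(nn.Module):
    """Illustrative bottleneck adapter built from 1D convolutions.

    Assumption: down-project with a 1D conv, apply a non-linearity,
    up-project back to the model dimension, and add a residual connection.
    """

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 32, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # keep the sequence length unchanged
        self.down = nn.Conv1d(hidden_dim, bottleneck_dim, kernel_size, padding=padding)
        self.act = nn.GELU()
        self.up = nn.Conv1d(bottleneck_dim, hidden_dim, kernel_size, padding=padding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim); Conv1d expects (batch, channels, seq_len)
        residual = x
        x = x.transpose(1, 2)
        x = self.up(self.act(self.down(x)))
        x = x.transpose(1, 2)
        return residual + x


if __name__ == "__main__":
    # In parameter-efficient transfer, only adapter parameters are trained;
    # the pre-trained backbone (e.g., a HuBERT encoder) stays frozen.
    adapter = ConvAdapter(hidden_dim=768)
    hidden_states = torch.randn(2, 100, 768)  # dummy frame-level features
    print(adapter(hidden_states).shape)  # torch.Size([2, 100, 768])
```

Keeping the bottleneck dimension small relative to the hidden size is what yields the low trainable-parameter fraction (around 0.94% in the paper's reported setting); the exact ratio depends on the backbone and adapter hyperparameters.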
- Yingting Li
- Ambuj Mehrish
- Shuai Zhao
- Rishabh Bhardwaj
- Amir Zadeh
- Navonil Majumder
- Rada Mihalcea
- Soujanya Poria