Towards Rehearsal-Free Multilingual ASR: A LoRA-based Case Study on Whisper (2408.10680v1)

Published 20 Aug 2024 in cs.CL, cs.SD, and eess.AS

Abstract: Pre-trained multilingual speech foundation models, like Whisper, have shown impressive performance across different languages. However, adapting these models to new or specific languages is computationally expensive and suffers from catastrophic forgetting. Addressing these issues, our study investigates strategies to enhance the model on new languages in the absence of original training data, while preserving the established performance on the original languages. Specifically, we first compare various LoRA-based methods to assess their vulnerability to forgetting. To mitigate this issue, we propose to leverage the LoRA parameters from the original model for approximate orthogonal gradient descent on the new samples. Additionally, we introduce a learnable rank coefficient to allocate trainable parameters for more efficient training. Our experiments with a Chinese Whisper model (for Uyghur and Tibetan) yield better results with a more compact parameter set.
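
The abstract combines two ideas: a LoRA adapter whose per-rank contribution is scaled by a learnable coefficient, and gradient updates on new-language data that are kept approximately orthogonal to the subspace spanned by the original model's LoRA parameters. The sketch below is a minimal PyTorch illustration of those two ideas under stated assumptions, not the authors' implementation: the class and function names, shapes, initialisation, and the QR-based projection are all choices made for the example.

```python
# Illustrative sketch only: a LoRA layer with a learnable per-rank coefficient and a
# gradient-projection step that keeps updates approximately orthogonal to the subspace
# spanned by a previous adapter's low-rank factors. All names and shapes are hypothetical.
import torch
import torch.nn as nn


class GatedLoRALinear(nn.Module):
    """Frozen linear layer plus a LoRA adapter whose rank directions are softly gated."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep the pre-trained weight frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)   # down-projection
        # Standard LoRA zero-initialises B; small random values are used here only so
        # the toy gradient below is non-zero.
        self.B = nn.Parameter(torch.randn(d_out, rank) * 0.01)  # up-projection
        # Learnable per-rank coefficient: lets training emphasise or shrink individual
        # rank directions instead of committing to a fixed rank budget.
        self.rank_coeff = nn.Parameter(torch.ones(rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (x @ self.A.t()) * self.rank_coeff  # gate each rank direction
        return self.base(x) + self.scaling * (delta @ self.B.t())


def project_out_old_subspace(grad: torch.Tensor, old_A: torch.Tensor) -> torch.Tensor:
    """Remove the component of `grad` lying in the row space of the old adapter's A.

    This approximates orthogonal gradient descent: updates on new-language samples are
    pushed away from directions the original adapter already occupies.
    """
    q, _ = torch.linalg.qr(old_A.t())          # orthonormal basis, shape (d_in, rank)
    return grad - (grad @ q) @ q.t()


if __name__ == "__main__":
    base = nn.Linear(64, 64)
    layer = GatedLoRALinear(base, rank=4)
    old_A = torch.randn(4, 64)  # stands in for the original model's LoRA A matrix

    x = torch.randn(2, 64)
    loss = layer(x).pow(2).mean()
    loss.backward()
    with torch.no_grad():
        layer.A.grad = project_out_old_subspace(layer.A.grad, old_A)
```

In an actual fine-tuning loop, the projection would be applied to the adapter gradients at every optimiser step before `optimizer.step()`; the rest of the training recipe is unchanged.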

Authors (7)
  1. Tianyi Xu (39 papers)
  2. Kaixun Huang (8 papers)
  3. Pengcheng Guo (55 papers)
  4. Yu Zhou (335 papers)
  5. Longtao Huang (27 papers)
  6. Hui Xue (109 papers)
  7. Lei Xie (337 papers)
