Analyzing Mitigation Strategies for Catastrophic Forgetting in End-to-End Training of Spoken Language Models (2505.17496v1)

Published 23 May 2025 in cs.CL, cs.AI, cs.LG, cs.SD, and eess.AS

Abstract: End-to-end training of Spoken Language Models (SLMs) commonly involves adapting pre-trained text-based LLMs to the speech modality through multi-stage training on diverse tasks such as ASR, TTS and spoken question answering (SQA). Although this multi-stage continual learning equips LLMs with both speech understanding and generation capabilities, the substantial differences in task and data distributions across stages can lead to catastrophic forgetting, where previously acquired knowledge is lost. This paper investigates catastrophic forgetting and evaluates three mitigation strategies (model merging, discounting the LoRA scaling factor, and experience replay) to balance knowledge retention with new learning. Results show that experience replay is the most effective, with further gains achieved by combining it with other methods. These findings provide insights for developing more robust and efficient SLM training pipelines.
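To make the two lighter-weight strategies concrete, below is a minimal Python sketch of (a) merging a LoRA update into a base weight with a discounted scaling factor and (b) mixing earlier-stage examples into the current stage's training data (experience replay). The function names, the discount value, and the replay ratio are illustrative assumptions, not the paper's implementation or hyperparameters.

```python
import random
import numpy as np

def merge_lora_with_discount(w_base, lora_A, lora_B, alpha, rank, discount=0.5):
    """Fold a LoRA update into the base weight, but discount the usual
    alpha/rank scaling so the adaptation is applied less aggressively.
    (Hypothetical sketch; the discount value is an assumption.)"""
    scale = (alpha / rank) * discount
    # Standard LoRA merge: W <- W + scale * (B @ A), here with numpy arrays.
    return w_base + scale * (lora_B @ lora_A)

def replay_mixture(current_stage_data, previous_stage_data, replay_ratio=0.1):
    """Experience replay: blend a small fraction of earlier-stage examples
    into the current stage's training set. (Ratio is an assumption.)"""
    n_replay = int(len(current_stage_data) * replay_ratio)
    replayed = random.sample(previous_stage_data,
                             min(n_replay, len(previous_stage_data)))
    mixed = current_stage_data + replayed
    random.shuffle(mixed)
    return mixed
```

In this sketch, discounting the scaling factor down-weights the new-task update at merge time, while replay keeps gradients from earlier tasks (e.g. ASR) present during later stages (e.g. SQA); the abstract reports replay as the most effective of the three, with additional gains when combined with the others.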

Authors (6)
  1. Chi-Yuan Hsiao (5 papers)
  2. Ke-Han Lu (16 papers)
  3. Kai-Wei Chang (292 papers)
  4. Chih-Kai Yang (13 papers)
  5. Wei-Chih Chen (20 papers)
  6. Hung-yi Lee (327 papers)