Analyzing Mitigation Strategies for Catastrophic Forgetting in End-to-End Training of Spoken Language Models (2505.17496v1)
Abstract: End-to-end training of Spoken Language Models (SLMs) commonly involves adapting pre-trained text-based LLMs to the speech modality through multi-stage training on diverse tasks such as ASR, TTS, and spoken question answering (SQA). Although this multi-stage continual learning equips LLMs with both speech understanding and generation capabilities, the substantial differences in task and data distributions across stages can lead to catastrophic forgetting, where previously acquired knowledge is lost. This paper investigates catastrophic forgetting and evaluates three mitigation strategies (model merging, discounting the LoRA scaling factor, and experience replay) to balance knowledge retention with new learning. Results show that experience replay is the most effective, with further gains achieved by combining it with the other methods. These findings provide insights for developing more robust and efficient SLM training pipelines.
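A minimal sketch of how the three mitigation strategies named in the abstract are commonly realized. The helper names, the interpolation form of merging, and the replay ratio are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import random
import torch

# --- 1) Model merging (one common form: linear interpolation of weights) ---
def merge_state_dicts(base_sd, adapted_sd, t=0.5):
    """Interpolate between pre- and post-adaptation checkpoints.
    Smaller t keeps more of the original text-stage behaviour."""
    return {k: (1 - t) * base_sd[k] + t * adapted_sd[k] for k in base_sd}

# --- 2) Discounting the LoRA scaling factor ---
def lora_delta(A, B, alpha, r, discount=0.5):
    """Effective weight update contributed by a LoRA adapter (B @ A).
    Multiplying the usual alpha/r scaling by `discount` < 1 weakens the
    adapter, retaining more of the frozen base model's knowledge."""
    return (alpha / r) * discount * (B @ A)

# --- 3) Experience replay ---
def replay_mixture(current_data, past_data, replay_ratio=0.1, seed=0):
    """Mix a fraction of earlier-stage examples (e.g. ASR/TTS) into the
    current stage's training set (e.g. SQA) to reduce forgetting."""
    rng = random.Random(seed)
    n = min(int(len(current_data) * replay_ratio), len(past_data))
    mixed = list(current_data) + rng.sample(list(past_data), n)
    rng.shuffle(mixed)
    return mixed
```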
- Chi-Yuan Hsiao
- Ke-Han Lu
- Kai-Wei Chang
- Chih-Kai Yang
- Wei-Chih Chen
- Hung-yi Lee