
Sequential Editing for Lifelong Training of Speech Recognition Models (2406.17935v2)

Published 25 Jun 2024 in cs.CL, cs.SD, and eess.AS

Abstract: Automatic Speech Recognition (ASR) traditionally assumes known domains, but adding data from a new domain raises concerns about the computational cost of retraining models on both the existing and new domains, while fine-tuning solely on the new domain risks Catastrophic Forgetting (CF). To address this, Lifelong Learning (LLL) algorithms have been proposed for ASR. Prior research has explored techniques such as Elastic Weight Consolidation, Knowledge Distillation, and Replay, all of which require either additional parameters or access to prior domain data. We propose Sequential Model Editing as a novel method for continually learning new domains in ASR systems. Unlike previous methods, our approach requires neither access to prior datasets nor the introduction of extra parameters. Our study demonstrates up to 15% Word Error Rate Reduction (WERR) over the fine-tuning baseline, and superior efficiency over other LLL techniques, on the CommonVoice English multi-accent dataset.
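The abstract's constraints (no replay of prior-domain data, no added parameters) fit a task-arithmetic style of model editing, in which the parameter delta from fine-tuning on a new domain is scaled and folded back into the running model. Below is a minimal PyTorch sketch of that general recipe; it is an illustration under those stated assumptions, not the authors' exact procedure. The `fine_tune` loop, the scaling factor `alpha`, and the domain loaders are hypothetical placeholders.

```python
# Hedged sketch of sequential, task-arithmetic-style model editing.
# After fine-tuning on each new domain, fold the parameter delta
# ("task vector") back into the current model, scaled by alpha.
# No prior-domain data is replayed and no parameters are added.
import copy

import torch
import torch.nn as nn


def fine_tune(model: nn.Module, loader, epochs: int = 1, lr: float = 1e-4) -> nn.Module:
    """Plain fine-tuning on the new domain only (placeholder loop)."""
    tuned = copy.deepcopy(model)
    opt = torch.optim.AdamW(tuned.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(tuned(x), y).backward()
            opt.step()
    return tuned


@torch.no_grad()
def apply_task_vector(base: nn.Module, tuned: nn.Module, alpha: float = 0.5) -> nn.Module:
    """Merge: theta_new = theta_base + alpha * (theta_tuned - theta_base)."""
    merged = copy.deepcopy(base)
    m_sd, b_sd, t_sd = merged.state_dict(), base.state_dict(), tuned.state_dict()
    for name in m_sd:
        if m_sd[name].is_floating_point():  # skip integer buffers, e.g. step counters
            m_sd[name] = b_sd[name] + alpha * (t_sd[name] - b_sd[name])
    merged.load_state_dict(m_sd)
    return merged


# Sequential loop over domains (e.g., accents), editing the same model each round:
# model = base_asr_model
# for loader in domain_loaders:            # new-domain data only
#     tuned = fine_tune(model, loader)
#     model = apply_task_vector(model, tuned, alpha=0.5)
```

With `alpha < 1`, each edit interpolates between the previous model and the new-domain fine-tune, which is one common way such methods trade new-domain gains against forgetting on earlier domains.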

Authors (5)
  1. Devang Kulshreshtha (7 papers)
  2. Saket Dingliwal (22 papers)
  3. Brady Houston (4 papers)
  4. Nikolaos Pappas (188 papers)
  5. Srikanth Ronanki (23 papers)