
Corpus Synthesis for Zero-shot ASR domain Adaptation using Large Language Models (2309.10707v1)

Published 18 Sep 2023 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: While Automatic Speech Recognition (ASR) systems are widely used in many real-world applications, they often do not generalize well to new domains and need to be finetuned on data from these domains. However, target-domain data usually are not readily available in many scenarios. In this paper, we propose a new strategy for adapting ASR models to new target domains without any text or speech from those domains. To accomplish this, we propose a novel data synthesis pipeline that uses an LLM to generate a target-domain text corpus, and a state-of-the-art controllable speech synthesis model to generate the corresponding speech. We propose a simple yet effective in-context instruction finetuning strategy to increase the effectiveness of the LLM in generating text corpora for new domains. Experiments on the SLURP dataset show that the proposed method achieves an average relative word error rate improvement of $28\%$ on unseen target domains without any performance drop in source domains.
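The abstract describes a two-stage synthesis pipeline (LLM generates target-domain text, a controllable TTS model generates matching speech) whose output is then used to finetune the ASR model. The sketch below illustrates that flow; all function names, the prompt format, and the stand-in LLM/TTS callables are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the corpus-synthesis pipeline from the abstract.
# Function names and the prompt template are illustrative assumptions.

def build_incontext_prompt(domain, example_sentences, n_new=5):
    """Compose an in-context instruction prompt asking an LLM to produce
    target-domain utterances (text-generation stage of the pipeline)."""
    examples = "\n".join(f"- {s}" for s in example_sentences)
    return (
        f"Generate {n_new} new user utterances for the '{domain}' domain.\n"
        f"Example utterances:\n{examples}\n"
        f"New utterances:"
    )

def synthesize_corpus(llm_generate, tts_synthesize, domain, examples):
    """Stage 1: LLM generates target-domain text from the prompt.
    Stage 2: a controllable TTS model renders each sentence to audio.
    The resulting (text, audio) pairs would then finetune the ASR model."""
    prompt = build_incontext_prompt(domain, examples)
    texts = llm_generate(prompt)                      # placeholder LLM call
    return [(t, tts_synthesize(t)) for t in texts]    # placeholder TTS call

# Toy stand-ins so the sketch runs without a real LLM or TTS model.
fake_llm = lambda prompt: ["play some jazz music", "skip this song"]
fake_tts = lambda text: b"<waveform bytes>"

corpus = synthesize_corpus(fake_llm, fake_tts, "music", ["turn up the volume"])
```

In the paper's actual setting, the in-context instruction finetuning strategy would shape how the LLM responds to prompts like the one above, and the synthesized pairs replace real target-domain data during ASR finetuning.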

Authors (8)
  1. Hsuan Su (11 papers)
  2. Ting-Yao Hu (13 papers)
  3. Hema Swetha Koppula (8 papers)
  4. Raviteja Vemulapalli (29 papers)
  5. Jen-Hao Rick Chang (18 papers)
  6. Karren Yang (12 papers)
  7. Gautam Varma Mantena (2 papers)
  8. Oncel Tuzel (62 papers)
Citations (1)