An Approach to Improve Robustness of NLP Systems against ASR Errors (2103.13610v1)

Published 25 Mar 2021 in cs.CL

Abstract: Speech-enabled systems typically first convert audio to text through an automatic speech recognition (ASR) model and then feed the text to downstream NLP modules. Errors made by the ASR system can seriously degrade the performance of the NLP modules, so it is essential to make them robust to ASR errors. Previous work has shown that data augmentation methods that inject ASR noise during training are effective at solving this problem. In this paper, we utilize a prevalent pre-trained language model to generate training samples with ASR-plausible noise. Compared to previous methods, our approach generates ASR noise that better fits the real-world error distribution. Experimental results on spoken language translation (SLT) and spoken language understanding (SLU) show that our approach effectively improves system robustness against ASR errors and achieves state-of-the-art results on both tasks.
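The data-augmentation idea in the abstract can be illustrated with a minimal sketch: replace some tokens in a clean transcript with acoustically plausible alternatives before training the downstream model. The confusion sets below are hand-written stand-ins for the candidates a pre-trained language model would propose; they are illustrative assumptions, not the paper's actual method or data.

```python
import random

# Toy homophone/near-homophone confusion sets. In the paper's approach,
# a pre-trained language model proposes these ASR-plausible candidates;
# here they are hard-coded purely for illustration.
CONFUSIONS = {
    "their": ["there", "they're"],
    "meet": ["meat"],
    "two": ["to", "too"],
}

def inject_asr_noise(tokens, noise_rate=0.3, rng=None):
    """Return a copy of `tokens` with some words swapped for
    ASR-plausible alternatives, simulating recognition errors."""
    rng = rng or random.Random(0)
    noisy = []
    for tok in tokens:
        candidates = CONFUSIONS.get(tok.lower())
        if candidates and rng.random() < noise_rate:
            noisy.append(rng.choice(candidates))  # simulate an ASR error
        else:
            noisy.append(tok)                     # keep the clean token
    return noisy
```

A training pipeline would mix such noised transcripts with clean ones so the SLT or SLU model sees error patterns it will encounter at inference time.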

Authors (5)
  1. Tong Cui (3 papers)
  2. Jinghui Xiao (9 papers)
  3. Liangyou Li (36 papers)
  4. Xin Jiang (242 papers)
  5. Qun Liu (230 papers)
Citations (11)
