
ASR-GLUE: A New Multi-task Benchmark for ASR-Robust Natural Language Understanding (2108.13048v2)

Published 30 Aug 2021 in cs.CL, cs.SD, and eess.AS

Abstract: Language understanding in speech-based systems has attracted much attention in recent years with the growing demand for voice interface applications. However, the robustness of natural language understanding (NLU) systems to errors introduced by automatic speech recognition (ASR) is under-examined. In this paper, we propose the ASR-GLUE benchmark, a new collection of 6 NLU tasks for evaluating the performance of models under ASR error across 3 levels of background noise and 6 speakers with varied voice characteristics. Based on the proposed benchmark, we systematically investigate the effect of ASR error on NLU tasks in terms of noise intensity, error type, and speaker variation. We further propose two methods, a correction-based method and a data augmentation-based method, to improve the robustness of NLU systems. Extensive experimental results and analyses show that the proposed methods are effective to some extent but still fall well short of human performance, demonstrating that NLU under ASR error remains very challenging and requires further research.
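The abstract only names the two approaches without detailing them. As a rough illustration of the data augmentation idea, the sketch below corrupts clean training text with ASR-like word errors (deletions, repetitions, substitutions) and mixes the noisy copies back into the training set. The function names `simulate_asr_errors` and `augment_dataset`, the error rates, and the crude substitution rule are all hypothetical; the paper's actual augmentation pipeline (for example, using real ASR transcriptions) is not described in this excerpt.

```python
import random

# Hypothetical, illustrative noise model: only simulates ASR-like word-level
# corruptions (drops, repeats, substitutions) to show the general idea of
# augmentation-based robustness training. Not the paper's actual pipeline.

def simulate_asr_errors(text, sub_rate=0.1, del_rate=0.05, rep_rate=0.05, rng=None):
    """Return a noisy copy of `text` with ASR-like word-level errors."""
    rng = rng or random.Random(0)
    noisy = []
    for word in text.split():
        r = rng.random()
        if r < del_rate:
            continue                                  # deletion: word is dropped
        elif r < del_rate + rep_rate:
            noisy.extend([word, word])                # repetition: word is duplicated
        elif r < del_rate + rep_rate + sub_rate:
            noisy.append(word[::-1] if len(word) > 3 else word)  # crude substitution stand-in
        else:
            noisy.append(word)
    return " ".join(noisy)

def augment_dataset(examples, copies=2):
    """Mix clean (text, label) pairs with ASR-noised copies for training."""
    augmented = list(examples)
    for text, label in examples:
        for i in range(copies):
            augmented.append((simulate_asr_errors(text, rng=random.Random(i)), label))
    return augmented

if __name__ == "__main__":
    data = [("the movie was surprisingly good", "positive"),
            ("the plot made no sense at all", "negative")]
    for text, label in augment_dataset(data):
        print(label, "|", text)
```

An NLU model fine-tuned on the augmented set sees both clean and error-laden inputs with the same labels, which is the basic mechanism by which augmentation-based methods aim to reduce sensitivity to ASR errors.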

Authors (6)
  1. Lingyun Feng (2 papers)
  2. Jianwei Yu (64 papers)
  3. Deng Cai (181 papers)
  4. Songxiang Liu (28 papers)
  5. Haitao Zheng (50 papers)
  6. Yan Wang (733 papers)
Citations (14)
