SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks (2212.10525v2)

Published 20 Dec 2022 in cs.CL and eess.AS

Abstract: Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. To facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will publish for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. We present the details of data collection and annotation and the performance of the baseline models. We also perform a sensitivity analysis of pipeline models' (speech recognizer + text model) performance with respect to speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
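
The pipeline setup the abstract describes, a speech recognizer feeding a text model so that downstream performance can be tracked against ASR accuracy, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the paper's released baselines: the HuggingFace checkpoints and the `jiwer` WER computation are placeholders chosen for concreteness, and summarization stands in for any of the four tasks.

```python
# Minimal sketch of a two-stage SLU pipeline (ASR -> text model), assuming
# HuggingFace `transformers` and `jiwer`. The checkpoints named here are
# illustrative placeholders, not the SLUE Phase-2 baselines.
from transformers import pipeline
import jiwer

# Stage 1: speech recognizer. Swapping this checkpoint for stronger or weaker
# ASR models is how a WER-sensitivity study like the paper's could be run.
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")

# Stage 2: text model for the downstream task (summarization as an example).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def run_pipeline(audio_path: str, reference_transcript: str) -> dict:
    """Transcribe one recording, run the text model, and report ASR WER."""
    hypothesis = asr(audio_path)["text"]
    # Word error rate ties downstream quality back to recognition accuracy.
    wer = jiwer.wer(reference_transcript, hypothesis)
    summary = summarizer(hypothesis, max_length=60,
                         min_length=10)[0]["summary_text"]
    return {"wer": wer, "summary": summary}
```

Repeating this over many recognizers (the paper uses more than 20) and plotting the task metric against WER gives the shape of the sensitivity analysis the abstract mentions.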

Authors (10)
  1. Suwon Shon (31 papers)
  2. Siddhant Arora (50 papers)
  3. Chyi-Jiunn Lin (6 papers)
  4. Ankita Pasad (14 papers)
  5. Felix Wu (30 papers)
  6. Roshan Sharma (24 papers)
  7. Wei-Lun Wu (2 papers)
  8. Karen Livescu (89 papers)
  9. Shinji Watanabe (416 papers)
  10. Hung-yi Lee (325 papers)
Citations (24)