
SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech (2111.10367v3)

Published 19 Nov 2021 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: Progress in speech processing has been facilitated by shared datasets and benchmarks. Historically these have focused on automatic speech recognition (ASR), speaker identification, or other lower-level tasks. Interest has been growing in higher-level spoken language understanding tasks, including using end-to-end models, but there are fewer annotated datasets for such tasks. At the same time, recent work shows the possibility of pre-training generic representations and then fine-tuning for several tasks using relatively little labeled data. We propose to create a suite of benchmark tasks for Spoken Language Understanding Evaluation (SLUE) consisting of limited-size labeled training sets and corresponding evaluation sets. This resource would allow the research community to track progress, evaluate pre-trained representations for higher-level tasks, and study open questions such as the utility of pipeline versus end-to-end approaches. We present the first phase of the SLUE benchmark suite, consisting of named entity recognition, sentiment analysis, and ASR on the corresponding datasets. We focus on naturally produced (not read or synthesized) speech, and freely available datasets. We provide new transcriptions and annotations on subsets of the VoxCeleb and VoxPopuli datasets, evaluation metrics and results for baseline models, and an open-source toolkit to reproduce the baselines and evaluate new models.

Authors (7)
  1. Suwon Shon (31 papers)
  2. Ankita Pasad (14 papers)
  3. Felix Wu (30 papers)
  4. Pablo Brusco (3 papers)
  5. Yoav Artzi (51 papers)
  6. Karen Livescu (89 papers)
  7. Kyu J. Han (17 papers)
Citations (69)
