On the Evaluation of Speech Foundation Models for Spoken Language Understanding (2406.10083v1)

Published 14 Jun 2024 in cs.CL, cs.SD, and eess.AS

Abstract: The Spoken Language Understanding Evaluation (SLUE) suite of benchmark tasks was recently introduced to address the need for open resources and benchmarking of complex spoken language understanding (SLU) tasks, including both classification and sequence generation tasks, on natural speech. The benchmark has demonstrated preliminary success in using pre-trained speech foundation models (SFM) for these SLU tasks. However, the community still lacks a fine-grained understanding of the comparative utility of different SFMs. Inspired by this, we ask: which SFMs offer the most benefits for these complex SLU tasks, and what is the most effective approach for incorporating these SFMs? To answer this, we perform an extensive evaluation of multiple supervised and self-supervised SFMs using several evaluation protocols: (i) frozen SFMs with a lightweight prediction head, (ii) frozen SFMs with a complex prediction head, and (iii) fine-tuned SFMs with a lightweight prediction head. Although the supervised SFMs are pre-trained on much more speech recognition data (with labels), they do not always outperform self-supervised SFMs; the latter tend to perform at least as well as, and sometimes better than, supervised SFMs, especially on the sequence generation tasks in SLUE. While there is no universally optimal way of incorporating SFMs, the complex prediction head gives the best performance for most tasks, although it increases the inference time. We also introduce an open-source toolkit and performance leaderboard, SLUE-PERB, for these tasks and modeling strategies.
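To make the three evaluation protocols concrete, here is a minimal PyTorch sketch (not the SLUE-PERB toolkit itself; class names, head sizes, and pooling choices are illustrative assumptions) of how a frozen or fine-tuned SFM can be paired with a lightweight or complex prediction head for a classification-style SLU task:

```python
# Illustrative sketch of the three SFM evaluation protocols described in the abstract:
#   (i)   frozen SFM + lightweight head
#   (ii)  frozen SFM + complex head
#   (iii) fine-tuned SFM + lightweight head
# The SFM is assumed to be any nn.Module mapping waveforms to (batch, time, feat_dim) features.
import torch
import torch.nn as nn

class LightweightHead(nn.Module):
    """Protocols (i)/(iii): a shallow linear classifier over mean-pooled SFM features."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (batch, time, feat_dim)
        return self.proj(feats.mean(dim=1))                   # pool over time, then classify

class ComplexHead(nn.Module):
    """Protocol (ii): a deeper (here, Transformer-encoder) head over the SFM features."""
    def __init__(self, feat_dim: int, num_classes: int, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.proj = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(self.encoder(feats).mean(dim=1))

def build_model(sfm: nn.Module, head: nn.Module, finetune_sfm: bool) -> nn.ModuleDict:
    """Freeze the SFM for protocols (i) and (ii); leave it trainable for protocol (iii)."""
    for p in sfm.parameters():
        p.requires_grad = finetune_sfm
    return nn.ModuleDict({"sfm": sfm, "head": head})
```

Sequence generation tasks in SLUE (e.g., named entity recognition from speech) would replace the pooled classifier with a token-level or encoder-decoder head; the trade-off noted in the abstract, where the complex head tends to score best but costs more at inference time, follows directly from the extra head layers applied on top of the SFM features.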

Authors (12)
  1. Siddhant Arora (50 papers)
  2. Ankita Pasad (14 papers)
  3. Chung-Ming Chien (13 papers)
  4. Jionghao Han (7 papers)
  5. Roshan Sharma (24 papers)
  6. Jee-weon Jung (69 papers)
  7. Hira Dhamyal (16 papers)
  8. William Chen (49 papers)
  9. Suwon Shon (31 papers)
  10. Hung-yi Lee (327 papers)
  11. Karen Livescu (89 papers)
  12. Shinji Watanabe (416 papers)
Citations (4)
