SpeechGLUE: How Well Can Self-Supervised Speech Models Capture Linguistic Knowledge? (2306.08374v2)

Published 14 Jun 2023 in cs.CL, cs.SD, and eess.AS

Abstract: Self-supervised learning (SSL) for speech representation has been successfully applied to various downstream tasks, such as speech and speaker recognition. More recently, speech SSL models have also been shown to be beneficial in advancing spoken language understanding tasks, implying that these SSL models have the potential to learn not only acoustic but also linguistic information. In this paper, we aim to clarify whether speech SSL techniques can capture linguistic knowledge well. For this purpose, we introduce SpeechGLUE, a speech version of the General Language Understanding Evaluation (GLUE) benchmark. Since GLUE comprises a variety of natural language understanding tasks, SpeechGLUE can elucidate the degree of linguistic ability of speech SSL models. Experiments demonstrate that speech SSL models, although inferior to text-based SSL models, perform better than baselines, suggesting that they can acquire a certain amount of general linguistic knowledge from unlabeled speech data alone.
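
To make the evaluation setup concrete, the sketch below shows one generic way to probe a frozen speech SSL encoder on a GLUE-style classification task rendered as speech: pool the encoder's hidden states into utterance embeddings and train a lightweight classifier on top. This is only a minimal illustration under stated assumptions, not the paper's actual pipeline; the wav2vec 2.0 checkpoint, mean pooling, and the linear probe head are all illustrative choices rather than the authors' configuration.

```python
# Minimal sketch (not the paper's code) of probing a frozen speech SSL model
# on a SpeechGLUE-style sentence-pair task. Assumptions: wav2vec 2.0 base as
# the SSL encoder, mean pooling over frames, and a linear probe classifier.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
ssl_model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def utterance_embedding(waveform, sample_rate=16000):
    """Mean-pool frozen SSL hidden states into a single utterance vector.

    `waveform` is a 1-D float array of raw audio at 16 kHz.
    """
    inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        hidden = ssl_model(**inputs).last_hidden_state  # (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0)                # (768,)

# Hypothetical probe for a two-sentence GLUE task (e.g., an entailment pair
# synthesized as two utterances): concatenate the two utterance embeddings
# and classify with a single trainable linear layer.
probe = torch.nn.Linear(2 * 768, 2)

def predict(wave_a, wave_b):
    features = torch.cat([utterance_embedding(wave_a),
                          utterance_embedding(wave_b)])
    return probe(features).argmax().item()
```

In a setup like this, only the probe is trained, so task accuracy reflects how much linguistic information the frozen SSL representations already carry, which is the comparison the abstract draws between speech SSL models, text-based SSL models, and simple baselines.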

Authors (8)
  1. Takanori Ashihara (28 papers)
  2. Takafumi Moriya (30 papers)
  3. Kohei Matsuura (26 papers)
  4. Tomohiro Tanaka (37 papers)
  5. Yusuke Ijima (11 papers)
  6. Taichi Asami (6 papers)
  7. Marc Delcroix (94 papers)
  8. Yukinori Honma (1 paper)
Citations (10)
