Towards Reducing the Need for Speech Training Data To Build Spoken Language Understanding Systems (2203.00006v1)

Published 26 Feb 2022 in cs.CL, cs.SD, and eess.AS

Abstract: The lack of speech data annotated with labels required for spoken language understanding (SLU) is often a major hurdle in building end-to-end (E2E) systems that can directly process speech inputs. In contrast, large amounts of text data with suitable labels are usually available. In this paper, we propose a novel text representation and training methodology that allows E2E SLU systems to be effectively constructed using these text resources. With very limited amounts of additional speech, we show that these models can be further improved to perform at levels close to similar systems built on the full speech datasets. The efficacy of our proposed approach is demonstrated on both intent and entity tasks using three different SLU datasets. With text-only training, the proposed system achieves up to 90% of the performance possible with full speech training. With just an additional 10% of speech data, these models significantly improve further to 97% of full performance.

Authors (4)
  1. Samuel Thomas (42 papers)
  2. Hong-Kwang J. Kuo (11 papers)
  3. Brian Kingsbury (54 papers)
  4. George Saon (39 papers)
Citations (24)
