
Speak or Chat with Me: End-to-End Spoken Language Understanding System with Flexible Inputs (2104.05752v2)

Published 7 Apr 2021 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: A major focus of recent research in spoken language understanding (SLU) has been on the end-to-end approach where a single model can predict intents directly from speech inputs without intermediate transcripts. However, this approach presents some challenges. First, since speech can be considered as personally identifiable information, in some cases only automatic speech recognition (ASR) transcripts are accessible. Second, intent-labeled speech data is scarce. To address the first challenge, we propose a novel system that can predict intents from flexible types of inputs: speech, ASR transcripts, or both. We demonstrate strong performance for either modality separately, and when both speech and ASR transcripts are available, through system combination, we achieve better results than using a single input modality. To address the second challenge, we leverage a semantically robust pre-trained BERT model and adopt a cross-modal system that co-trains text embeddings and acoustic embeddings in a shared latent space. We further enhance this system by utilizing an acoustic module pre-trained on LibriSpeech and domain-adapting the text module on our target datasets. Our experiments show significant advantages for these pre-training and fine-tuning strategies, resulting in a system that achieves competitive intent-classification performance on Snips SLU and Fluent Speech Commands datasets.
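The abstract's central idea, co-training text embeddings and acoustic embeddings in a shared latent space so that a single intent classifier can serve speech, ASR transcripts, or both, can be sketched roughly as below. This is an illustrative sketch, not the authors' implementation: the projection sizes, the placeholder BiLSTM acoustic encoder, and the MSE alignment loss are assumptions standing in for the paper's LibriSpeech-pre-trained acoustic module and its actual training objectives.

```python
# Minimal sketch (not the paper's code) of a cross-modal SLU model:
# a pre-trained BERT text encoder and an acoustic encoder are projected into a
# shared latent space, one classifier handles either modality, and an alignment
# loss ties the two embeddings together. All hyperparameters are illustrative.
import torch
import torch.nn as nn
from transformers import BertModel

class SharedSpaceSLU(nn.Module):
    def __init__(self, num_intents: int, shared_dim: int = 256, acoustic_feat_dim: int = 80):
        super().__init__()
        # Text branch: pre-trained BERT, pooled representation -> shared space.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.text_proj = nn.Linear(self.bert.config.hidden_size, shared_dim)
        # Acoustic branch: placeholder BiLSTM over filterbank frames
        # (the paper instead uses an acoustic module pre-trained on LibriSpeech).
        self.acoustic_enc = nn.LSTM(acoustic_feat_dim, shared_dim // 2,
                                    batch_first=True, bidirectional=True)
        # Shared intent classifier used by both modalities.
        self.classifier = nn.Linear(shared_dim, num_intents)

    def encode_text(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.text_proj(out.pooler_output)            # (B, shared_dim)

    def encode_speech(self, features):
        out, _ = self.acoustic_enc(features)                 # (B, T, shared_dim)
        return out.mean(dim=1)                               # simple temporal pooling

    def forward(self, input_ids=None, attention_mask=None, features=None):
        # Flexible inputs: ASR transcript only, speech only, or both.
        text_emb = self.encode_text(input_ids, attention_mask) if input_ids is not None else None
        speech_emb = self.encode_speech(features) if features is not None else None
        logits = {}
        if text_emb is not None:
            logits["text"] = self.classifier(text_emb)
        if speech_emb is not None:
            logits["speech"] = self.classifier(speech_emb)
        return logits, text_emb, speech_emb

def co_training_loss(logits, text_emb, speech_emb, intent_labels, align_weight=1.0):
    ce = nn.CrossEntropyLoss()
    loss = sum(ce(l, intent_labels) for l in logits.values())
    if text_emb is not None and speech_emb is not None:
        # Pull the acoustic embedding toward the (detached) text embedding so both
        # modalities share one latent space; MSE is an assumed choice here.
        loss = loss + align_weight * nn.functional.mse_loss(speech_emb, text_emb.detach())
    return loss
```

At inference time, when both speech and an ASR transcript are available, the two sets of logits could be combined (for example, averaged), mirroring the system-combination setup the abstract reports as outperforming either single modality.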

Authors (8)
  1. Sujeong Cha (1 paper)
  2. Wangrui Hou (1 paper)
  3. Hyun Jung (4 papers)
  4. My Phung (1 paper)
  5. Michael Picheny (32 papers)
  6. Hong-Kwang Kuo (5 papers)
  7. Samuel Thomas (42 papers)
  8. Edmilson Morais (7 papers)
Citations (15)