Understanding Semantics from Speech Through Pre-training (1909.10924v1)

Published 24 Sep 2019 in eess.AS, cs.CL, and cs.LG

Abstract: End-to-end Spoken Language Understanding (SLU) is proposed to infer the semantic meaning directly from audio features without an intermediate text representation. Although the acoustic model component of an end-to-end SLU system can be pre-trained with Automatic Speech Recognition (ASR) targets, the SLU component can only learn semantic features from limited task-specific training data. In this paper, for the first time, we propose large-scale unsupervised pre-training for the SLU component of an end-to-end SLU system, so that the SLU component can preserve semantic features learned from massive unlabeled audio data. Because the output of the acoustic model component, i.e. phoneme posterior sequences, has characteristics very different from text sequences, we propose a novel pre-training model called BERT-PLM, which stands for Bidirectional Encoder Representations from Transformers through Permutation Language Modeling. BERT-PLM trains the SLU component on unlabeled data through a regression objective equivalent to the partial permutation language modeling objective, while leveraging full bidirectional context information with BERT networks. Experimental results show that our approach outperforms state-of-the-art end-to-end systems with over 12.5% error reduction.
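
To make the pre-training idea concrete, below is a minimal PyTorch-style sketch of the kind of objective the abstract describes: a bidirectional Transformer encoder over phoneme posterior sequences that regresses the true posterior vectors of a randomly chosen "tail" of a per-utterance permutation. All class names, dimensions, and the simple input-zeroing scheme are illustrative assumptions, not the authors' released BERT-PLM implementation, which may constrain attention by the permutation order rather than zeroing inputs.

```python
# Illustrative sketch only; hyperparameters and masking scheme are assumptions.
import torch
import torch.nn as nn

class PhonemePosteriorEncoder(nn.Module):
    """BERT-style bidirectional Transformer encoder over phoneme posterior sequences."""
    def __init__(self, num_phonemes=64, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        self.input_proj = nn.Linear(num_phonemes, d_model)    # posteriors -> hidden states
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.output_proj = nn.Linear(d_model, num_phonemes)   # hidden states -> reconstructed posteriors

    def forward(self, posteriors):                            # (batch, time, num_phonemes)
        h = self.encoder(self.input_proj(posteriors))
        return self.output_proj(h)

def permutation_regression_loss(model, posteriors, predict_ratio=0.15):
    """One unsupervised pre-training step: take the tail of a random permutation of
    frames, hide those frames at the input, and regress their true posterior vectors,
    a continuous-valued stand-in for partial permutation language modeling."""
    b, t, _ = posteriors.shape
    perm = torch.rand(b, t).argsort(dim=1)                    # random frame permutation per utterance
    n_pred = max(1, int(t * predict_ratio))
    target_idx = perm[:, -n_pred:]                            # frames to predict (permutation tail)
    mask = torch.zeros(b, t, dtype=torch.bool)
    mask.scatter_(1, target_idx, torch.ones_like(target_idx, dtype=torch.bool))
    masked_in = posteriors.masked_fill(mask.unsqueeze(-1), 0.0)
    pred = model(masked_in)
    return nn.functional.mse_loss(pred[mask], posteriors[mask])

# Usage on unlabeled audio already decoded to phoneme posteriors by the acoustic model:
model = PhonemePosteriorEncoder()
dummy = torch.softmax(torch.randn(2, 100, 64), dim=-1)        # fake phoneme posterior sequences
loss = permutation_regression_loss(model, dummy)
loss.backward()
```

A regression loss stands in for the usual token-level cross-entropy because the inputs here are continuous posterior vectors rather than discrete text tokens, which is exactly the mismatch with text sequences the abstract highlights.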

Authors (6)
  1. Pengwei Wang (29 papers)
  2. Liangchen Wei (1 paper)
  3. Yong Cao (33 papers)
  4. Jinghui Xie (5 papers)
  5. Yuji Cao (8 papers)
  6. Zaiqing Nie (27 papers)
Citations (6)
