ST-BERT: Cross-modal Language Model Pre-training For End-to-end Spoken Language Understanding (2010.12283v2)

Published 23 Oct 2020 in cs.CL and cs.LG

Abstract: Language model pre-training has shown promising results in various downstream tasks. In this context, we introduce a cross-modal pre-trained language model, called Speech-Text BERT (ST-BERT), to tackle end-to-end spoken language understanding (E2E SLU) tasks. Taking phoneme posteriors and subword-level text as input, ST-BERT learns a contextualized cross-modal alignment via our two proposed pre-training tasks: Cross-modal Masked Language Modeling (CM-MLM) and Cross-modal Conditioned Language Modeling (CM-CLM). Experimental results on three benchmarks show that our approach is effective across various SLU datasets and exhibits surprisingly marginal performance degradation even when only 1% of the training data is available. Our method also achieves further SLU performance gains via domain-adaptive pre-training with domain-specific speech-text pair data.
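To make the pre-training setup concrete, the sketch below illustrates one way a CM-MLM-style step could look: phoneme posteriors are projected into the encoder's hidden space, subword embeddings are concatenated alongside them, and a masked-token loss is computed on the text positions. This is a minimal PyTorch sketch under assumed shapes, hidden sizes, and a 15% masking rate; it is not the authors' released implementation, and names such as `CrossModalEncoder` are illustrative.

```python
# Minimal sketch of a CM-MLM-style pre-training step (assumed hyperparameters).
import torch
import torch.nn as nn

class CrossModalEncoder(nn.Module):
    def __init__(self, num_phonemes=70, vocab_size=30000, d_model=256, n_layers=4):
        super().__init__()
        # Phoneme posteriors (soft distributions over phonemes) are projected
        # into the shared model space; subword tokens use an embedding table.
        self.phoneme_proj = nn.Linear(num_phonemes, d_model)
        self.text_emb = nn.Embedding(vocab_size, d_model)
        # Modality/type embeddings distinguish speech positions from text positions.
        self.type_emb = nn.Embedding(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mlm_head = nn.Linear(d_model, vocab_size)

    def forward(self, phoneme_post, text_ids):
        # phoneme_post: (B, T_speech, num_phonemes); text_ids: (B, T_text)
        speech = self.phoneme_proj(phoneme_post) + self.type_emb.weight[0]
        text = self.text_emb(text_ids) + self.type_emb.weight[1]
        h = self.encoder(torch.cat([speech, text], dim=1))
        # Predict (masked) subword tokens from the text positions only.
        return self.mlm_head(h[:, phoneme_post.size(1):])

# Toy usage: mask some text tokens and compute the masked-LM loss.
model = CrossModalEncoder()
phon = torch.softmax(torch.randn(2, 50, 70), dim=-1)      # fake phoneme posteriors
text = torch.randint(0, 30000, (2, 12))
labels = text.clone()
mask = torch.rand_like(text, dtype=torch.float) < 0.15    # 15% masking (assumption)
text = text.masked_fill(mask, 0)                          # 0 stands in for [MASK]
labels = labels.masked_fill(~mask, -100)                  # ignore unmasked positions
logits = model(phon, text)
loss = nn.CrossEntropyLoss(ignore_index=-100)(logits.reshape(-1, 30000), labels.reshape(-1))
```

The CM-CLM variant described in the abstract would differ mainly in the conditioning direction (generating one modality's tokens conditioned on the other) rather than in this input-construction step.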

Authors (4)
  1. Minjeong Kim (26 papers)
  2. Gyuwan Kim (20 papers)
  3. Sang-Woo Lee (34 papers)
  4. Jung-Woo Ha (67 papers)
Citations (34)