Bayesian Transformer Language Models for Speech Recognition (2102.04754v1)

Published 9 Feb 2021 in cs.CL

Abstract: State-of-the-art neural language models (LMs) represented by Transformers are highly complex. Their use of fixed, deterministic parameter estimates fails to account for model uncertainty and leads to over-fitting and poor generalization when given limited training data. In order to address these issues, this paper proposes a full Bayesian learning framework for Transformer LM estimation. Efficient variational inference based approaches are used to estimate the latent parameter posterior distributions associated with different parts of the Transformer model architecture, including the multi-head self-attention, feed-forward and embedding layers. Statistically significant word error rate (WER) reductions of up to 0.5% absolute (3.18% relative) and consistent perplexity gains were obtained over the baseline Transformer LMs on state-of-the-art Switchboard-corpus-trained LF-MMI factored TDNN systems with i-Vector speaker adaptation. Performance improvements were also obtained on a cross-domain LM adaptation task requiring porting a Transformer LM trained on the Switchboard and Fisher data to a low-resource DementiaBank elderly speech corpus.
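
The variational approach summarized in the abstract can be illustrated with a short, self-contained sketch. The code below is not the authors' implementation; it assumes a mean-field Gaussian posterior over the weights of a single Transformer feed-forward projection, a standard-normal prior, and the reparameterization trick, with the resulting KL term added to the usual cross-entropy loss to form the variational lower bound. The class name `BayesianLinear`, the layer sizes, and the KL scaling are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of variational Bayesian estimation
# for one Transformer sub-layer: mean-field Gaussian posterior q(w) = N(mu, sigma^2)
# over the weights, standard-normal prior, reparameterization trick for gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer whose weights are sampled from a learned Gaussian posterior."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Variational parameters: per-weight posterior mean and log standard deviation.
        self.w_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.w_log_sigma = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.xavier_uniform_(self.w_mu)

    def forward(self, x):
        # Reparameterization: w = mu + sigma * eps with eps ~ N(0, I), so gradients
        # flow to mu and log_sigma through the sampled weights.
        sigma = self.w_log_sigma.exp()
        w = self.w_mu + sigma * torch.randn_like(sigma)
        return F.linear(x, w, self.bias)

    def kl_to_standard_normal(self):
        # KL( N(mu, sigma^2) || N(0, 1) ) summed over all weights; this is the
        # regularizer term of the variational lower bound.
        sigma2 = (2.0 * self.w_log_sigma).exp()
        return 0.5 * (sigma2 + self.w_mu ** 2 - 1.0 - 2.0 * self.w_log_sigma).sum()


# Hypothetical usage: replace one feed-forward projection of a Transformer block
# and train on cross-entropy plus the KL term scaled by the training-set size.
bayes_ffn = BayesianLinear(512, 2048)
hidden = torch.randn(8, 32, 512)             # (batch, sequence, model_dim)
activations = torch.relu(bayes_ffn(hidden))
kl_term = bayes_ffn.kl_to_standard_normal()  # divide by the number of training tokens
```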

Authors (9)
  1. Boyang Xue (23 papers)
  2. Jianwei Yu (64 papers)
  3. Junhao Xu (19 papers)
  4. Shansong Liu (19 papers)
  5. Shoukang Hu (38 papers)
  6. Zi Ye (20 papers)
  7. Mengzhe Geng (42 papers)
  8. Xunying Liu (92 papers)
  9. Helen Meng (204 papers)
Citations (24)