Memory Augmented Lookup Dictionary based Language Modeling for Automatic Speech Recognition (2301.00066v1)

Published 30 Dec 2022 in cs.CL and eess.AS

Abstract: Recent studies have shown that using an external language model (LM) benefits end-to-end Automatic Speech Recognition (ASR). However, predicting tokens that appear infrequently in the training set remains challenging. Long-tail prediction problems have been widely studied in many applications, but have only been addressed by a few studies for ASR and LMs. In this paper, we propose a new memory-augmented, lookup-dictionary-based Transformer architecture for LM. The newly introduced lookup dictionary incorporates rich contextual information from the training set, which is vital for correctly predicting long-tail tokens. In extensive experiments on Chinese and English data sets, our proposed method is shown to outperform the baseline Transformer LM by a large margin on both word/character error rate and tail-token error rate, without impacting decoding efficiency. Overall, we demonstrate the effectiveness of our proposed method in boosting ASR decoding performance, especially for long-tail tokens.
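The abstract does not spell out how the lookup dictionary is queried or combined with the Transformer LM, so the sketch below only illustrates one common realization of such a memory: a kNN-style datastore of (context embedding, next-token) pairs drawn from the training set, whose retrieval distribution is interpolated with the base LM's softmax output. All names, dimensions, the distance weighting, and the mixing weight `lam` are hypothetical placeholders for illustration, not the authors' implementation.

```python
import numpy as np

# Toy datastore of (context embedding, next-token id) pairs built from the
# training set. In a kNN-LM-style setup the keys would be the LM's hidden
# states at each training position; here they are random placeholders.
rng = np.random.default_rng(0)
EMB_DIM, VOCAB, STORE = 64, 1000, 5000
keys = rng.normal(size=(STORE, EMB_DIM)).astype(np.float32)
values = rng.integers(0, VOCAB, size=STORE)

def lookup_distribution(query, k=8, temperature=1.0):
    """Distribution over next tokens from the k nearest datastore entries."""
    dists = np.linalg.norm(keys - query, axis=1)   # L2 distance to every key
    nn = np.argpartition(dists, k)[:k]             # indices of the k nearest neighbors
    weights = np.exp(-dists[nn] / temperature)     # closer neighbors weigh more
    weights /= weights.sum()
    probs = np.zeros(VOCAB)
    np.add.at(probs, values[nn], weights)          # scatter weights onto token ids
    return probs

def interpolate(p_lm, query, lam=0.25):
    """Mix the base LM distribution with the lookup distribution."""
    return (1.0 - lam) * p_lm + lam * lookup_distribution(query)

# Usage: p_lm would come from the Transformer LM's softmax at this decoding
# step; here a uniform distribution stands in for it.
p_lm = np.full(VOCAB, 1.0 / VOCAB)
query = rng.normal(size=EMB_DIM).astype(np.float32)
p_final = interpolate(p_lm, query)
print(p_final.sum())  # ~1.0, still a valid distribution
```

Because the datastore is queried with a single embedding per step and the mixture is a fixed-cost vector operation, a lookup of this kind can be added without changing the decoder's per-step complexity, which is consistent with the abstract's claim of no impact on decoding efficiency.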

Authors (5)
  1. Yukun Feng (7 papers)
  2. Ming Tu (20 papers)
  3. Rui Xia (53 papers)
  4. Chuanzeng Huang (10 papers)
  5. Yuxuan Wang (239 papers)
