Context-Aware Transformer Transducer for Speech Recognition (2111.03250v1)

Published 5 Nov 2021 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: End-to-end (E2E) automatic speech recognition (ASR) systems often have difficulty recognizing uncommon words that appear infrequently in the training data. One promising method to improve recognition accuracy on such rare words is to latch onto personalized/contextual information at inference. In this work, we present a novel context-aware transformer transducer (CATT) network that improves the state-of-the-art transformer-based ASR system by taking advantage of such contextual signals. Specifically, we propose a multi-head attention-based context-biasing network, which is jointly trained with the rest of the ASR sub-networks. We explore different techniques to encode contextual data and to create the final attention context vectors. We also leverage both BLSTM and pretrained BERT based models to encode contextual data and guide the network training. Using an in-house far-field dataset, we show that CATT, using a BERT based context encoder, improves the word error rate of the baseline transformer transducer and outperforms an existing deep contextual model by 24.2% and 19.4% respectively.
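
To make the biasing mechanism concrete, the following is a minimal sketch (not the authors' implementation) of a multi-head attention-based context-biasing block in the spirit of the abstract: ASR encoder features attend over encoded context phrases, and the resulting attention context vectors are fused back into the features. The module name, dimensions, and the fusion step are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ContextBiasingLayer(nn.Module):
    """Hypothetical cross-attention context-biasing block (illustrative only)."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Cross-attention: queries come from ASR features,
        # keys/values come from context-phrase embeddings.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.combine = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, asr_feats, context_embs, context_mask=None):
        # asr_feats:    (batch, T, d_model)  audio or label encoder outputs
        # context_embs: (batch, N, d_model)  one embedding per context phrase
        #                                    (e.g. from a BLSTM or BERT encoder)
        # context_mask: (batch, N) True for padded context entries
        ctx_vectors, _ = self.cross_attn(
            query=asr_feats, key=context_embs, value=context_embs,
            key_padding_mask=context_mask,
        )
        # Fuse the attended context vectors back into the ASR features.
        fused = self.combine(torch.cat([asr_feats, ctx_vectors], dim=-1))
        return self.norm(fused)


if __name__ == "__main__":
    layer = ContextBiasingLayer()
    audio = torch.randn(2, 50, 256)    # 50 encoder frames per utterance
    phrases = torch.randn(2, 10, 256)  # 10 context phrases (e.g. contact names)
    out = layer(audio, phrases)
    print(out.shape)  # torch.Size([2, 50, 256])
```

In the paper, such a biasing network is trained jointly with the transducer's audio encoder, label encoder, and joint network; the sketch above only illustrates the cross-attention and fusion step in isolation.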

Authors (7)
  1. Feng-Ju Chang (15 papers)
  2. Jing Liu (525 papers)
  3. Martin Radfar (17 papers)
  4. Athanasios Mouchtaris (31 papers)
  5. Maurizio Omologo (15 papers)
  6. Ariya Rastrow (55 papers)
  7. Siegfried Kunzmann (13 papers)
Citations (75)