Towards Effective and Efficient Non-autoregressive Decoding Using Block-based Attention Mask (2406.10034v3)

Published 14 Jun 2024 in cs.SD, cs.AI, and eess.AS

Abstract: This paper proposes a novel non-autoregressive (NAR) block-based Attention Mask Decoder (AMD) that flexibly balances performance-efficiency trade-offs for Conformer ASR systems. AMD performs parallel NAR inference within contiguous blocks of output labels that are concealed using attention masks, while conducting left-to-right AR prediction and history context amalgamation between blocks. A beam search algorithm is designed to leverage a dynamic fusion of CTC, AR Decoder, and AMD probabilities. Experiments on the LibriSpeech-100hr corpus suggest the tripartite Decoder incorporating the AMD module produces a maximum decoding speed-up ratio of 1.73x over the baseline CTC+AR decoding, while incurring no statistically significant word error rate (WER) increase on the test sets. When operating with the same decoding real time factors, statistically significant WER reductions of up to 0.7% and 0.3% absolute (5.3% and 6.1% relative) were obtained over the CTC+AR baseline.
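As a rough illustration of the mechanism the abstract describes, the sketch below builds a block-based attention mask in which already-decoded labels from previous blocks form visible history, while the labels of the current block are concealed from one another so they can be predicted in parallel, and it fuses CTC, AR decoder, and AMD scores with a simple linear interpolation during beam search scoring. The mask layout, fusion weights, and function names are assumptions for illustration, not the paper's exact implementation.

```python
import torch

def block_attention_mask(num_history: int, block_size: int) -> torch.Tensor:
    """Boolean attention mask for one hypothetical AMD decoding step.

    Assumed layout: the first `num_history` positions are labels already
    decoded in previous blocks; the last `block_size` positions are the
    concealed labels of the current block. True = may be attended to.
    """
    total = num_history + block_size
    mask = torch.zeros(total, total, dtype=torch.bool)
    # All positions see the full decoded history (left-to-right AR context
    # carried over between blocks).
    mask[:, :num_history] = True
    # Concealed positions in the current block attend only to themselves,
    # not to each other, so the whole block can be filled in parallel (NAR).
    idx = torch.arange(num_history, total)
    mask[idx, idx] = True
    return mask

def fused_score(log_p_ctc: float, log_p_ar: float, log_p_amd: float,
                lam_ctc: float = 0.3, lam_ar: float = 0.4,
                lam_amd: float = 0.3) -> float:
    """Illustrative linear interpolation of the three per-hypothesis scores;
    the actual dynamic fusion weights in the paper are tuned, not fixed."""
    return lam_ctc * log_p_ctc + lam_ar * log_p_ar + lam_amd * log_p_amd

# Example: 4 decoded history labels, current block of 3 concealed labels.
print(block_attention_mask(num_history=4, block_size=3).int())
```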

Authors (12)
  1. Tianzi Wang (37 papers)
  2. Xurong Xie (38 papers)
  3. Zhaoqing Li (16 papers)
  4. Shoukang Hu (38 papers)
  5. Jiajun Deng (75 papers)
  6. Mingyu Cui (31 papers)
  7. Shujie Hu (36 papers)
  8. Mengzhe Geng (42 papers)
  9. Guinan Li (23 papers)
  10. Helen Meng (204 papers)
  11. Xunying Liu (92 papers)
  12. Zengrui Jin (30 papers)