Blank Collapse: Compressing CTC emission for the faster decoding (2210.17017v2)

Published 31 Oct 2022 in cs.CL, cs.SD, and eess.AS

Abstract: The Connectionist Temporal Classification (CTC) model is a very efficient method for modeling sequences, especially speech data. To use a CTC model for an Automatic Speech Recognition (ASR) task, beam search decoding with an external language model, such as an n-gram LM, is necessary to obtain reasonable results. In this paper we analyze the blank label in CTC beam search in depth and propose a very simple method that reduces the amount of computation, resulting in faster beam search decoding. With this method, we obtain up to 78% faster decoding than ordinary beam search decoding, with a very small loss of accuracy, on the LibriSpeech datasets. We show this method is effective not only empirically, through experiments, but also theoretically, through mathematical reasoning. We also observe that this reduction is more pronounced when the model's accuracy is higher.
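
The core idea is to compress the CTC emission before beam search by collapsing consecutive frames that are dominated by the blank label, since long runs of near-certain blanks contribute little to the search. Below is a minimal NumPy sketch of that frame-dropping idea; the function name `blank_collapse` and the `threshold` parameter are illustrative assumptions, not the authors' exact rule.

```python
import numpy as np

def blank_collapse(emission: np.ndarray, blank_id: int = 0,
                   threshold: float = 0.99) -> np.ndarray:
    """Compress a CTC emission by collapsing runs of blank-dominated frames.

    emission:  [T, V] per-frame posterior probabilities from a CTC model.
    threshold: a frame is treated as "blank" when its blank probability
               exceeds this value (hypothetical cutoff for illustration).

    Returns the compressed [T', V] emission, keeping one frame per blank
    run so the blank transition survives. A sketch of the idea described
    in the abstract, not the paper's precise criterion.
    """
    is_blank = emission[:, blank_id] > threshold
    keep = np.ones(len(emission), dtype=bool)
    # Drop a blank frame when the previous frame is also blank,
    # collapsing each run of blanks down to a single frame.
    keep[1:] = ~(is_blank[1:] & is_blank[:-1])
    return emission[keep]

# Example: 6 frames, 3-symbol vocabulary (blank = index 0)
probs = np.array([[0.999, 0.0005, 0.0005],
                  [0.999, 0.0005, 0.0005],
                  [0.10,  0.85,   0.05],
                  [0.999, 0.0005, 0.0005],
                  [0.999, 0.0005, 0.0005],
                  [0.05,  0.05,   0.90]])
print(blank_collapse(probs).shape)  # (4, 3): two redundant blank frames dropped
```

Because beam search cost scales with the number of frames, shortening the emission this way speeds up decoding roughly in proportion to the fraction of frames removed, which is consistent with the observation that more accurate models (whose blank posteriors are more confident) benefit more.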

Authors (4)
  1. Minkyu Jung (2 papers)
  2. Ohhyeok Kwon (1 paper)
  3. Seunghyun Seo (18 papers)
  4. Soonshin Seo (6 papers)
Citations (3)