
Comparison of Decoding Strategies for CTC Acoustic Models (1708.04469v1)

Published 15 Aug 2017 in cs.CL

Abstract: Connectionist Temporal Classification has recently attracted a lot of interest as it offers an elegant approach to building acoustic models (AMs) for speech recognition. The CTC loss function maps an input sequence of observable feature vectors to an output sequence of symbols. Output symbols are conditionally independent of each other under CTC loss, so a language model (LM) can be incorporated conveniently during decoding, retaining the traditional separation of acoustic and linguistic components in ASR. For fixed vocabularies, Weighted Finite State Transducers provide a strong baseline for efficient integration of CTC AMs with n-gram LMs. Character-based neural LMs provide a straightforward solution for open vocabulary speech recognition and all-neural models, and can be decoded with beam search. Finally, sequence-to-sequence models can be used to translate a sequence of individual sounds into a word string. We compare the performance of these three approaches, and analyze their error patterns, which provides insightful guidance for future research and development in this important area.
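The core CTC decoding step the abstract refers to can be illustrated with a minimal greedy decoder: take the best symbol per frame, merge repeated symbols, and drop blanks. This is a toy sketch for illustration only (the symbol indices and probabilities below are invented, not taken from the paper, and the paper's actual systems use WFST, beam search, and seq2seq decoding rather than pure greedy decoding):

```python
import numpy as np

def ctc_greedy_decode(log_probs, blank=0):
    """Collapse a frame-wise best path into a label sequence:
    argmax per frame, merge consecutive repeats, remove blanks."""
    best_path = np.argmax(log_probs, axis=1)
    decoded = []
    prev = None
    for s in best_path:
        if s != prev and s != blank:
            decoded.append(int(s))
        prev = s
    return decoded

# Toy frame posteriors over {blank, symbol 1, symbol 2} for 5 frames.
probs = np.array([
    [0.1, 0.8, 0.1],   # symbol 1
    [0.1, 0.8, 0.1],   # symbol 1 (repeat, merged)
    [0.8, 0.1, 0.1],   # blank (separates repeats)
    [0.1, 0.1, 0.8],   # symbol 2
    [0.1, 0.1, 0.8],   # symbol 2 (repeat, merged)
])
print(ctc_greedy_decode(np.log(probs)))  # → [1, 2]
```

Because CTC outputs are conditionally independent given the input, an external LM score can be added to each candidate extension during beam search instead of the simple argmax above, which is what makes the WFST and character-LM integrations compared in the paper straightforward.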

Authors (7)
  1. Thomas Zenkel (5 papers)
  2. Ramon Sanabria (22 papers)
  3. Florian Metze (79 papers)
  4. Jan Niehues (76 papers)
  5. Matthias Sperber (24 papers)
  6. Sebastian Stüker (11 papers)
  7. Alex Waibel (48 papers)
Citations (41)