
Efficient Sequence Transduction by Jointly Predicting Tokens and Durations (2304.06795v2)

Published 13 Apr 2023 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: This paper introduces a novel Token-and-Duration Transducer (TDT) architecture for sequence-to-sequence tasks. TDT extends conventional RNN-Transducer architectures by jointly predicting both a token and its duration, i.e. the number of input frames covered by the emitted token. This is achieved by using a joint network with two outputs which are independently normalized to generate distributions over tokens and durations. During inference, TDT models can skip input frames guided by the predicted duration output, which makes them significantly faster than conventional Transducers which process the encoder output frame by frame. TDT models achieve both better accuracy and significantly faster inference than conventional Transducers on different sequence transduction tasks. TDT models for Speech Recognition achieve better accuracy and up to 2.82X faster inference than conventional Transducers. TDT models for Speech Translation achieve an absolute gain of over 1 BLEU on the MUST-C test compared with conventional Transducers, and its inference is 2.27X faster. In Speech Intent Classification and Slot Filling tasks, TDT models improve the intent accuracy by up to over 1% (absolute) over conventional Transducers, while running up to 1.28X faster. Our implementation of the TDT model will be open-sourced with the NeMo (https://github.com/NVIDIA/NeMo) toolkit.

Citations (12)

Summary

  • The paper introduces a novel TDT model that jointly predicts tokens and durations to significantly accelerate inference while enhancing accuracy.
  • It extends the RNN-T framework with a forward-backward algorithm adapted to joint token-duration prediction, deriving analytical gradients for training.
  • Empirical results show up to 2.82X faster ASR inference and improved noise robustness compared to conventional models.

Efficient Sequence Transduction by Jointly Predicting Tokens and Durations

The paper presents a novel approach to sequence transduction through the Token-and-Duration Transducer (TDT), which enhances the conventional RNN-Transducer model by predicting both tokens and their durations. This dual prediction aims to improve both inference speed and accuracy across various sequence transduction tasks such as speech recognition, speech translation, and spoken language understanding.
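
As a concrete illustration, here is a minimal PyTorch-style sketch of a joint network with two independently normalized heads, matching the description in the abstract. The module name, layer sizes, and duration set are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class TDTJoint(nn.Module):
    """Joint network with two independently normalized outputs: a
    distribution over tokens (including blank) and a distribution over
    a fixed set of durations. All sizes here are illustrative."""

    def __init__(self, enc_dim, pred_dim, hidden_dim, vocab_size, num_durations):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, hidden_dim)
        self.pred_proj = nn.Linear(pred_dim, hidden_dim)
        self.token_head = nn.Linear(hidden_dim, vocab_size + 1)    # +1 for blank
        self.duration_head = nn.Linear(hidden_dim, num_durations)  # e.g. durations {0,1,2,3,4}

    def forward(self, enc_t, pred_u):
        # Combine one encoder frame with one prediction-network state.
        h = torch.tanh(self.enc_proj(enc_t) + self.pred_proj(pred_u))
        # The two outputs are normalized independently, as the paper describes.
        token_log_probs = torch.log_softmax(self.token_head(h), dim=-1)
        duration_log_probs = torch.log_softmax(self.duration_head(h), dim=-1)
        return token_log_probs, duration_log_probs
```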

Key Contributions

  1. TDT Architecture: The TDT model introduces a joint network with two independently normalized outputs—one predicting tokens and the other predicting token durations. This design allows the TDT to make use of frame-skipping during inference, substantially increasing processing speed compared to conventional transducers that process inputs frame-by-frame.
  2. Algorithmic Derivations: The authors extend the forward-backward algorithm to accommodate joint token-duration prediction and derive analytical solutions for the gradients needed in training, drawing on function-merging techniques; a sketch of the extended recursion is given after this list.
  3. Empirical Performance Improvements: TDT models demonstrate superior performance in ASR, speech translation, and SLU tasks. For speech recognition, TDT models achieve up to 2.82X faster inference than conventional RNN-T models while maintaining or improving accuracy. For speech translation, they deliver an absolute gain of over 1 BLEU on the MUST-C test set with 2.27X faster inference.
  4. Noise Robustness: The paper reports that TDT models are more robust to noise, degrading less than conventional Transducers when evaluated on noisy variants of standard datasets. TDT is also less prone to emitting long runs of repeated tokens, a known failure mode of RNN-T models on such inputs, making it more reliable in practical deployments.
  5. Implementation and Open Source Release: The authors commit to open-sourcing their implementation via NVIDIA's NeMo toolkit, facilitating future research and development efforts.
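
To make the second contribution concrete, here is a rough sketch of how the RNN-T forward recursion extends once every emission carries a duration. The duration set $\mathcal{D}$, the boundary conditions, and the restriction that blank emissions must advance time ($d \ge 1$) are assumptions for illustration, not details quoted from the paper:

$$
\alpha(t,u) = \sum_{\substack{d \in \mathcal{D} \\ d \ge 1}} \alpha(t-d,\,u)\, P_T(\varnothing \mid t-d,\,u)\, P_D(d \mid t-d,\,u) \;+\; \sum_{d \in \mathcal{D}} \alpha(t-d,\,u-1)\, P_T(y_u \mid t-d,\,u-1)\, P_D(d \mid t-d,\,u-1)
$$

Here $P_T$ and $P_D$ are the independently normalized token and duration distributions, $\varnothing$ is the blank symbol, and the training loss is the negative log of the terminal forward value; the analytical gradients the authors derive flow through this same lattice.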

Implications and Future Directions

The TDT model's ability to jointly predict tokens and durations presents significant potential for reducing computational requirements and increasing inference speed, making it particularly suitable for real-time applications where latency is critical. By skipping frames dynamically based on duration predictions, TDT models achieve faster processing without sacrificing accuracy, a crucial advantage in deploying ASR systems on resource-constrained devices.
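
A short sketch of greedy TDT decoding shows where that speed comes from. `joint` is a two-head joint network as sketched earlier; `pred_net` is a hypothetical prediction-network wrapper exposing `initial_state()`, `step(token, state)`, and `state.output`, standing in for whatever interface a real implementation uses:

```python
def tdt_greedy_decode(joint, encoder_frames, pred_net,
                      durations=(0, 1, 2, 3, 4), blank_id=0):
    """Greedy TDT inference sketch: advance the time index by the
    predicted duration instead of stepping frame by frame."""
    hyp = []
    state = pred_net.initial_state()
    t = 0
    max_emissions = 10 * len(encoder_frames)  # safety cap for duration-0 loops
    while t < len(encoder_frames) and len(hyp) < max_emissions:
        token_lp, dur_lp = joint(encoder_frames[t], state.output)
        token = int(token_lp.argmax())
        skip = durations[int(dur_lp.argmax())]
        if token != blank_id:
            hyp.append(token)
            state = pred_net.step(token, state)
        else:
            # A blank must move time forward, or the loop cannot terminate.
            skip = max(skip, 1)
        t += skip  # this jump past `skip` frames is what makes TDT inference fast
    return hyp
```

If the duration head always predicted 1, this loop would degenerate into conventional frame-by-frame RNN-T greedy decoding; the reported speed-ups come from the larger jumps.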

The methodology applied in the TDT could be extrapolated to other sequence transduction tasks, potentially benefiting areas like machine translation and text generation. Moreover, its robust performance in noisy conditions opens up avenues for further research into deploying TDT models in environments with unpredictable audio quality, such as mobile devices and remote communication systems.

In terms of future research, exploring heuristic pruning methods for efficient beam search within the TDT framework could further enhance performance. Further investigation into optimizing the balance between token and duration predictions, to achieve even greater speed-ups without compromising accuracy, would also be valuable.

Overall, the Token-and-Duration Transducer represents a meaningful advance in sequence transduction, aligning theoretical developments with practical applications and offering a solid foundation for the next wave of innovations in the field of speech processing and beyond.