
Streaming Joint Speech Recognition and Disfluency Detection (2211.08726v2)

Published 16 Nov 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Disfluency detection has mainly been solved in a pipeline approach, as post-processing of speech recognition. In this study, we propose Transformer-based encoder-decoder models that jointly solve speech recognition and disfluency detection, which work in a streaming manner. Compared to pipeline approaches, the joint models can leverage acoustic information that makes disfluency detection robust to recognition errors and provides non-verbal clues. Moreover, joint modeling results in low-latency and lightweight inference. We investigate two joint model variants for streaming disfluency detection: a transcript-enriched model and a multi-task model. The transcript-enriched model is trained on text with special tags indicating the starting and ending points of the disfluent part. However, it has problems with latency and standard language model adaptation, which arise from the additional disfluency tags. We propose a multi-task model to solve such problems, which has two output layers at the Transformer decoder: one for speech recognition and the other for disfluency detection. It is modeled to be conditioned on the currently recognized token with an additional token-dependency mechanism. We show that the proposed joint models outperformed a BERT-based pipeline approach in both accuracy and latency, on both the Switchboard corpus and the Corpus of Spontaneous Japanese.
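The transcript-enriched model described above emits a transcript interleaved with special tags that mark where a disfluent span starts and ends. A minimal sketch of how such an enriched transcript maps to per-token disfluency labels (the actual tag symbols used in the paper may differ; `<disfl>`/`</disfl>` are placeholders):

```python
# Hypothetical special tags marking a disfluent span in the output
# transcript; the paper's actual tag symbols may differ.
DISFL_OPEN, DISFL_CLOSE = "<disfl>", "</disfl>"

def parse_enriched(transcript):
    """Split a tag-enriched transcript into plain tokens plus
    per-token disfluency labels (1 = inside a disfluent span)."""
    tokens, labels, inside = [], [], False
    for piece in transcript.split():
        if piece == DISFL_OPEN:
            inside = True
        elif piece == DISFL_CLOSE:
            inside = False
        else:
            tokens.append(piece)
            labels.append(1 if inside else 0)
    return tokens, labels

toks, labs = parse_enriched("i <disfl> i mean </disfl> we went home")
print(toks)  # -> ['i', 'i', 'mean', 'we', 'went', 'home']
print(labs)  # -> [0, 1, 1, 0, 0, 0]
```

Because the tags are extra output tokens, a streaming decoder must emit them before the words they cover can be finalized, which is one source of the latency problem the abstract mentions.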

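The multi-task variant instead attaches two output layers to the same Transformer decoder state, with the disfluency head conditioned on the currently recognized token. A toy sketch of one decoding step, assuming the token-dependency mechanism concatenates the hidden state with the predicted token's embedding (all sizes and weights here are hypothetical stand-ins; in the paper they come from a trained streaming Transformer decoder):

```python
# Toy vocabulary and binary disfluency tag set.
VOCAB = ["uh", "i", "mean", "we"]
TAGS = ["fluent", "disfluent"]

def argmax(xs):
    return max(range(len(xs)), key=xs.__getitem__)

def joint_step(h, W_asr, W_tag, embed):
    """One decoding step with two output heads on the same state h."""
    # ASR head: pick the most likely token from the hidden state.
    token_logits = [sum(a * b for a, b in zip(h, col)) for col in W_asr]
    token = argmax(token_logits)
    # Token-dependency mechanism (sketched): concatenate the hidden
    # state with the recognized token's embedding before tagging.
    cond = h + embed[token]
    tag_logits = [sum(a * b for a, b in zip(cond, col)) for col in W_tag]
    return VOCAB[token], TAGS[argmax(tag_logits)]

# Tiny hand-picked numbers so the example is deterministic.
h = [1.0, 0.0]                                            # decoder hidden state
W_asr = [[0.1, 0.0], [0.9, 0.0], [0.2, 0.0], [0.3, 0.0]]  # one column per token
embed = [[0.0, 0.0], [0.5, 0.5], [0.0, 0.0], [0.0, 0.0]]  # token embeddings
W_tag = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]]      # one column per tag

token, tag = joint_step(h, W_asr, W_tag, embed)
print(token, tag)  # -> i fluent
```

Since the tag is produced alongside the token rather than as extra output symbols, this design avoids the added decoding steps of the transcript-enriched model, which is the latency advantage the abstract claims.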
Authors (7)
  1. Hayato Futami (24 papers)
  2. Emiru Tsunoo (34 papers)
  3. Kentaro Shibata (3 papers)
  4. Yosuke Kashiwagi (29 papers)
  5. Takao Okuda (1 paper)
  6. Siddhant Arora (50 papers)
  7. Shinji Watanabe (416 papers)
Citations (6)
