A novel multimodal dynamic fusion network for disfluency detection in spoken utterances (2211.14700v1)

Published 27 Nov 2022 in cs.CL and eess.AS

Abstract: Disfluency, though originating from human spoken utterances, is primarily studied as a uni-modal text-based NLP task. Based on early-fusion and self-attention-based multimodal interaction between text and acoustic modalities, in this paper, we propose a novel multimodal architecture for disfluency detection from individual utterances. Our architecture leverages a multimodal dynamic fusion network that adds minimal parameters over an existing text encoder commonly used in prior art, in order to exploit the prosodic and acoustic cues hidden in speech. Through experiments, we show that our proposed model achieves state-of-the-art results on the widely used English Switchboard corpus for disfluency detection and outperforms prior unimodal and multimodal systems in the literature by a significant margin. In addition, we present a thorough qualitative analysis and show that, unlike text-only systems, which suffer from spurious correlations in the data, our system overcomes this problem through additional cues from speech signals. We make all our code publicly available on GitHub.
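The abstract describes the architecture only at a high level: frame-level acoustic features are fused early with token-level text-encoder representations, and self-attention over the fused sequence drives per-token disfluency prediction. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the module names, the interpolation-based frame-to-token alignment, and the single attention layer are illustrative assumptions, not the authors' exact design (see their GitHub release for the real implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalDisfluencyTagger(nn.Module):
    """Illustrative sketch: fuse token-level text embeddings with frame-level
    acoustic features (early fusion), apply self-attention over the fused
    sequence, and predict a per-token disfluency label."""

    def __init__(self, text_dim=768, acoustic_dim=80, num_labels=2, num_heads=8):
        super().__init__()
        # Project acoustic frames (e.g., filterbanks) into the text embedding space.
        self.acoustic_proj = nn.Linear(acoustic_dim, text_dim)
        # Early fusion: concatenate aligned modalities, then mix back to text_dim.
        self.fusion = nn.Linear(2 * text_dim, text_dim)
        # Self-attention over the fused token sequence (one layer for brevity).
        self.attn = nn.TransformerEncoderLayer(
            d_model=text_dim, nhead=num_heads, batch_first=True
        )
        self.classifier = nn.Linear(text_dim, num_labels)

    def forward(self, text_emb, acoustic_feats):
        # text_emb:       (batch, num_tokens, text_dim) from a pretrained text encoder
        # acoustic_feats: (batch, num_frames, acoustic_dim) frame-level speech features
        a = self.acoustic_proj(acoustic_feats)
        # Align acoustic frames to the token sequence length by linear interpolation
        # (a stand-in for whatever word-level alignment the paper actually uses).
        a = F.interpolate(
            a.transpose(1, 2), size=text_emb.size(1),
            mode="linear", align_corners=False,
        ).transpose(1, 2)
        fused = self.fusion(torch.cat([text_emb, a], dim=-1))
        fused = self.attn(fused)              # multimodal self-attention
        return self.classifier(fused)         # per-token disfluency logits
```

A tagger like this adds only the projection, fusion, and attention layers on top of the text encoder, which is consistent with the abstract's claim of "minimal parameters" over a text-only baseline.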

Authors (5)
  1. Sreyan Ghosh (46 papers)
  2. Utkarsh Tyagi (18 papers)
  3. Sonal Kumar (30 papers)
  4. Manan Suri (32 papers)
  5. Rajiv Ratn Shah (108 papers)
Citations (1)
