
Artificial Disfluency Detection, Uh No, Disfluency Generation for the Masses (2211.09235v1)

Published 16 Nov 2022 in cs.CL

Abstract: Existing approaches to disfluency detection typically require large annotated datasets. However, current datasets for this task are limited, suffer from class imbalance, and lack some types of disfluencies encountered in real-world scenarios. This work proposes LARD, a method for automatically generating artificial disfluencies from fluent text. LARD can simulate all the different types of disfluencies (repetitions, replacements, and restarts) based on the reparandum/interregnum annotation scheme. In addition, it incorporates contextual embeddings into the disfluency generation to produce realistic, context-aware artificial disfluencies. Since the proposed method requires only fluent text, it can be used directly for training, bypassing the need for annotated disfluent data. Our empirical evaluation demonstrates that LARD can be used effectively when little or no annotated data is available. Furthermore, our detailed analysis suggests that the proposed method generates realistic disfluencies and improves the accuracy of existing disfluency detectors.
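To make the generation idea concrete, here is a minimal sketch of how repetition and restart disfluencies could be injected into fluent text. This is an illustrative toy, not the authors' implementation: function names and parameters are assumptions, and LARD's replacement type additionally uses contextual embeddings (e.g., a masked language model) to choose plausible substitute words, which is omitted here.

```python
import random

def make_repetition(tokens, max_ngram=3, rng=None):
    """Inject a repetition disfluency by duplicating a random n-gram in place.

    E.g. "i want a flight" -> "i want a a flight". The duplicated copy acts
    as the reparandum; the original span is the repair.
    """
    rng = rng or random.Random()
    n = rng.randint(1, min(max_ngram, len(tokens)))
    start = rng.randrange(0, len(tokens) - n + 1)
    span = tokens[start:start + n]
    return tokens[:start] + span + tokens[start:]

def make_restart(fluent_a, fluent_b, rng=None):
    """Inject a restart: an abandoned prefix of one fluent sentence
    (the reparandum) followed by a second fluent sentence."""
    rng = rng or random.Random()
    cut = rng.randint(1, max(1, len(fluent_a) // 2))
    return fluent_a[:cut] + fluent_b

sent = "i want a flight to boston".split()
print(" ".join(make_repetition(sent, rng=random.Random(0))))
```

Because the generator knows exactly which tokens it inserted, each output comes with token-level reparandum labels for free, which is what allows training a detector without manually annotated disfluent data.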

Authors (5)
  1. T. Passali
  2. T. Mavropoulos
  3. G. Tsoumakas
  4. G. Meditskos
  5. S. Vrochidis
Citations (1)
