
Disfluency Detection with Unlabeled Data and Small BERT Models (2104.10769v2)

Published 21 Apr 2021 in cs.CL

Abstract: Disfluency detection models now approach high accuracy on English text. However, little exploration has been done into reducing model size and inference time. At the same time, automatic speech recognition (ASR) models are moving from server-side inference to local, on-device inference. Supporting models in the transcription pipeline (like disfluency detection) must follow suit. In this work we concentrate on the disfluency detection task, focusing on small, fast, on-device models based on the BERT architecture. We demonstrate that it is possible to train disfluency detection models as small as 1.3 MiB while retaining high performance. We build on previous work that showed the benefit of data augmentation approaches such as self-training. We then evaluate the effect of domain mismatch between conversational and written text on model performance. We find that domain adaptation and data augmentation strategies have a more pronounced effect on these smaller models than on conventional BERT models.
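
For context, disfluency detection is typically framed as per-token sequence labeling. The sketch below, which is not the authors' code, shows that framing with one of the publicly released small BERT checkpoints via Hugging Face Transformers; the checkpoint name, the binary label scheme, and the subword-to-word mapping are illustrative assumptions, and the classification head here is untrained.

```python
# A minimal sketch (not from the paper) of disfluency detection as
# token classification with a small pre-trained BERT.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# BERT-Tiny (2 layers, 128 hidden units), one of the publicly released
# small BERT checkpoints; assumed here as a stand-in for the paper's models.
MODEL_NAME = "google/bert_uncased_L-2_H-128_A-2"
LABELS = ["fluent", "disfluent"]  # per-token binary tagging (assumed scheme)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def tag_disfluencies(text: str) -> list[tuple[str, str]]:
    """Label each word as fluent/disfluent (head is untrained: demo only)."""
    words = text.split()
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]          # (seq_len, num_labels)
    preds = logits.argmax(dim=-1).tolist()
    # Map subword predictions back to words (take each word's first subword).
    out, seen = [], set()
    for idx, word_id in enumerate(enc.word_ids(0)):
        if word_id is not None and word_id not in seen:
            seen.add(word_id)
            out.append((words[word_id], LABELS[preds[idx]]))
    return out

print(tag_disfluencies("I want a flight to Boston uh to Denver"))
```

Note that even BERT-Tiny is tens of megabytes at float32; reaching the 1.3 MiB footprint the abstract reports presumably requires further compression such as quantization, which this sketch does not show.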

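The abstract also credits data augmentation via self-training. Schematically, a larger teacher model labels unlabeled conversational text, and the small student is trained on the gold data plus the teacher's confident pseudo-labels. Everything in the sketch below (the predict/fit interfaces, the confidence threshold) is hypothetical, not the paper's recipe.

```python
# A schematic sketch of self-training as data augmentation.
def self_train(teacher, student, labeled_data, unlabeled_texts, threshold=0.9):
    """Augment gold data with confident teacher pseudo-labels, then
    train the (smaller) student on the combined set."""
    pseudo = []
    for text in unlabeled_texts:
        labels, confidence = teacher.predict(text)  # hypothetical API
        if confidence >= threshold:                 # keep confident labels only
            pseudo.append((text, labels))
    student.fit(labeled_data + pseudo)              # hypothetical API
    return student
```
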
Authors (6)
  1. Johann C. Rocholl
  2. Vicky Zayats
  3. Daniel D. Walker
  4. Noah B. Murad
  5. Aaron Schneider
  6. Daniel J. Liebling
Citations (26)