Auxiliary Sequence Labeling Tasks for Disfluency Detection (2011.04512v2)

Published 24 Oct 2020 in cs.CL and cs.LG

Abstract: Detecting disfluencies in spontaneous speech is an important preprocessing step in natural language processing and speech recognition applications. Existing work on disfluency detection has focused on a single objective designed only for disfluency detection, whereas auxiliary objectives that exploit linguistic information about a word, such as named-entity or part-of-speech information, can be effective. In this paper, we focus on detecting disfluencies in spoken transcripts and propose a method that uses named entity recognition (NER) and part-of-speech (POS) tagging as auxiliary sequence labeling (SL) tasks for disfluency detection. First, we investigate cases in which exploiting a word's linguistic information can prevent mispredictions of important words and aid the correct detection of disfluencies. Second, we show that training a disfluency detection model with auxiliary SL tasks improves its F-score on disfluency detection. We then analyze which auxiliary SL tasks are influential depending on the baseline model. Experimental results on the widely used English Switchboard dataset show that our method outperforms the previous state of the art in disfluency detection.
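The multi-task setup the abstract describes, a shared encoder with one classification head per sequence labeling task and a joint loss, can be sketched as follows. This is a minimal toy illustration, not the authors' architecture: the random-matrix "encoder", head names, label sizes, and the `aux_weight` hyperparameter are all assumptions for demonstration.

```python
# Toy sketch: disfluency detection with auxiliary sequence labeling (POS) heads
# sharing one token representation, trained under a joint weighted loss.
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, EMB, N_DISFL, N_POS = 6, 16, 2, 5  # toy sizes (assumed)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the gold label at each token position.
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Shared per-token representations (stand-in for a BERT-style encoder output).
hidden = rng.normal(size=(SEQ_LEN, EMB))

# One linear classification head per task over the shared representation.
W_disfl = rng.normal(size=(EMB, N_DISFL))  # main task: fluent / disfluent
W_pos = rng.normal(size=(EMB, N_POS))      # auxiliary task: POS tags

disfl_labels = rng.integers(0, N_DISFL, size=SEQ_LEN)
pos_labels = rng.integers(0, N_POS, size=SEQ_LEN)

loss_disfl = cross_entropy(hidden @ W_disfl, disfl_labels)
loss_pos = cross_entropy(hidden @ W_pos, pos_labels)

# Joint objective: main loss plus a weighted auxiliary loss.
aux_weight = 0.5  # assumed hyperparameter
joint_loss = loss_disfl + aux_weight * loss_pos
print(float(joint_loss))
```

Because both heads read the same hidden states, gradients from the auxiliary POS loss also update the shared encoder, which is the mechanism by which linguistic information from the auxiliary tasks can help the main disfluency labels.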

Authors (8)
  1. Dongyub Lee (9 papers)
  2. Byeongil Ko (6 papers)
  3. Myeong Cheol Shin (5 papers)
  4. Taesun Whang (9 papers)
  5. Daniel Lee (45 papers)
  6. Eun Hwa Kim (1 paper)
  7. EungGyun Kim (8 papers)
  8. Jaechoon Jo (2 papers)
Citations (8)