On the Role of Style in Parsing Speech with Neural Models (2010.04288v1)

Published 8 Oct 2020 in cs.CL

Abstract: The differences between written text and conversational speech are substantial; parsers trained on treebanked text have previously given very poor results on spontaneous speech. For spoken language, the style mismatch also extends to prosodic cues, though this is less well understood. This paper re-examines the use of written text in parsing speech in the context of recent advances in neural language processing. We show that neural approaches make it easier to leverage written text to improve parsing of spontaneous speech, and that prosody further improves over this state-of-the-art result. Further, we find an asymmetric degradation from the read vs. spontaneous speech mismatch, with spontaneous speech being more generally useful for training parsers.

Authors (4)
  1. Trang Tran (11 papers)
  2. Jiahong Yuan (12 papers)
  3. Yang Liu (2253 papers)
  4. Mari Ostendorf (57 papers)
Citations (14)
