Parsing Speech: A Neural Approach to Integrating Lexical and Acoustic-Prosodic Information (1704.07287v2)
Abstract: In conversational speech, the acoustic signal provides cues that help listeners disambiguate difficult parses. For automatically parsing spoken utterances, we introduce a model that integrates transcribed text and acoustic-prosodic features, using a convolutional neural network over energy and pitch trajectories coupled with an attention-based recurrent neural network that accepts text and prosodic features. We find that the different types of acoustic-prosodic features are individually helpful, and that together they give statistically significant improvements in parse and disfluency detection F1 scores over a strong text-only baseline. In this study, which assumes known sentence boundaries, error analyses show that acoustic-prosodic features help most on sentences containing disfluencies, that attachment decisions improve the most, and that transcription errors obscure the gains from prosody.
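The abstract describes convolving over per-word energy and pitch trajectories to obtain prosodic feature vectors that are fed, alongside text, to the parser's encoder. Below is a minimal sketch of that idea, not the paper's actual implementation: a 1-D convolution with max-pooling over a word's acoustic frames, whose output is concatenated with a word embedding. All dimensions, kernel shapes, and the random toy inputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_maxpool(frames, filters):
    """Convolve kernels over a frame trajectory and max-pool over time.

    frames:  (T, C) array — T acoustic frames, C channels (e.g. energy, pitch)
    filters: (K, W, C) array — K kernels of width W over C channels
    Returns a (K,) prosodic feature vector. Assumes T >= W.
    """
    K, W, C = filters.shape
    T = frames.shape[0]
    out = np.full(K, -np.inf)
    for t in range(T - W + 1):
        window = frames[t:t + W]  # (W, C) slice of the trajectory
        # Each kernel's activation on this window; tensordot sums over (W, C)
        acts = np.tensordot(filters, window, axes=([1, 2], [0, 1]))  # (K,)
        out = np.maximum(out, acts)  # max-pool over time positions
    return out

# Toy setup: 2 channels (energy, pitch), 4 kernels of width 3 (all assumed values)
filters = rng.standard_normal((4, 3, 2))
word_frames = rng.standard_normal((10, 2))   # 10 frames aligned to one word
word_embedding = rng.standard_normal(8)      # hypothetical text embedding

prosody_vec = conv1d_maxpool(word_frames, filters)        # (4,)
encoder_input = np.concatenate([word_embedding, prosody_vec])  # (12,) per-word input
print(encoder_input.shape)
```

In this sketch the prosodic vector is simply concatenated with the word embedding before the recurrent encoder; the paper's attention-based RNN and training details are beyond the scope of this illustration.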
- Trang Tran
- Shubham Toshniwal
- Mohit Bansal
- Kevin Gimpel
- Karen Livescu
- Mari Ostendorf