Is Incoherence Surprising? Targeted Evaluation of Coherence Prediction from Language Models (2105.03495v1)
Abstract: Coherent discourse is distinguished from a mere collection of utterances by the satisfaction of a diverse set of constraints, for example choice of expression, logical relation between denoted events, and implicit compatibility with world-knowledge. Do neural language models encode such constraints? We design an extendable set of test suites addressing different aspects of discourse and dialogue coherence. Unlike most previous coherence evaluation studies, we address specific linguistic devices beyond sentence order perturbations, allowing for a more fine-grained analysis of what constitutes coherence and what neural models trained on a language modelling objective do encode. Extending the targeted evaluation paradigm for neural language models (Marvin and Linzen, 2018) to phenomena beyond syntax, we show that this paradigm is equally suited to evaluate linguistic qualities that contribute to the notion of coherence.
- Anne Beyer (4 papers)
- Sharid Loáiciga (5 papers)
- David Schlangen (51 papers)