
Grounding Gaps in Language Model Generations (2311.09144v2)

Published 15 Nov 2023 in cs.CL and cs.HC

Abstract: Effective conversation requires common ground: a shared understanding between the participants. Common ground, however, does not emerge spontaneously in conversation. Speakers and listeners work together to both identify and construct a shared basis while avoiding misunderstanding. To accomplish grounding, humans rely on a range of dialogue acts, like clarification (What do you mean?) and acknowledgment (I understand.). However, it is unclear whether LLMs generate text that reflects human grounding. To this end, we curate a set of grounding acts and propose corresponding metrics that quantify attempted grounding. We study whether LLM generations contain grounding acts, simulating turn-taking from several dialogue datasets and comparing results to humans. We find that -- compared to humans -- LLMs generate language with less conversational grounding, instead generating text that appears to simply presume common ground. To understand the roots of the identified grounding gap, we examine the role of instruction tuning and preference optimization, finding that training on contemporary preference data leads to a reduction in generated grounding acts. Altogether, we highlight the need for more research investigating conversational grounding in human-AI interaction.
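As a rough illustration of how one could quantify grounding acts in generated turns, the sketch below counts clarification and acknowledgment cues in a set of turns. The act categories, surface patterns, and example dialogues are hypothetical stand-ins, not the paper's curated taxonomy or metrics.

```python
# Minimal, illustrative sketch of counting grounding acts in generated turns.
# The act labels and regex patterns are hypothetical; the paper's actual
# grounding-act taxonomy and metrics differ.
import re

GROUNDING_PATTERNS = {
    "clarification": re.compile(r"\b(what do you mean|could you clarify|do you mean)\b", re.I),
    "acknowledgment": re.compile(r"\b(i see|i understand|got it|that makes sense)\b", re.I),
}

def grounding_acts(turn: str) -> set[str]:
    """Return the set of grounding-act labels detected in a single turn."""
    return {act for act, pattern in GROUNDING_PATTERNS.items() if pattern.search(turn)}

def grounding_rate(turns: list[str]) -> float:
    """Fraction of turns containing at least one grounding act."""
    if not turns:
        return 0.0
    return sum(bool(grounding_acts(t)) for t in turns) / len(turns)

# Hypothetical comparison of simulated model turns against human reference turns.
model_turns = ["Sure, here is the full plan for your trip.", "Got it, so you want a refund?"]
human_turns = ["Do you mean the plan for next week?", "I see, let me check that for you."]
print(grounding_rate(model_turns), grounding_rate(human_turns))
```

In the paper, grounding is measured over simulated turn-taking on existing dialogue datasets; a keyword matcher like this would only approximate attempted grounding.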

Authors (6)
  1. Omar Shaikh (23 papers)
  2. Kristina Gligorić (22 papers)
  3. Ashna Khetan (1 paper)
  4. Matthias Gerstgrasser (11 papers)
  5. Diyi Yang (151 papers)
  6. Dan Jurafsky (118 papers)
Citations (11)