
Probing Commonsense Explanation in Dialogue Response Generation (2104.09574v4)

Published 19 Apr 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Humans use commonsense reasoning (CSR) implicitly to produce natural and coherent responses in conversations. Aiming to close the gap between current response generation (RG) models and human communication abilities, we seek to understand why RG models respond as they do by probing their understanding of the commonsense reasoning that elicits proper responses. We formalize the problem by framing commonsense as a latent variable in the RG task and using explanations for responses as a textual form of commonsense. We collect 6k annotated explanations justifying responses from four dialogue datasets, ask humans to verify them, and propose two probing settings to evaluate RG models' CSR capabilities. Probing results show that models fail to capture the logical relations between commonsense explanations and responses, and that neither fine-tuning on in-domain data nor increasing model size leads to an understanding of CSR for RG. We hope our study motivates more research into making RG models emulate the human reasoning process in pursuit of smooth human-AI communication.
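The latent-variable framing mentioned in the abstract can be sketched as follows; the notation here is an illustrative assumption, not taken from the paper. Writing $d$ for the dialogue history, $r$ for the response, and $c$ for a (textual) commonsense explanation, the response distribution marginalizes over the unobserved explanation:

```latex
% Hedged sketch of commonsense as a latent variable in response generation:
% the explanation c is unobserved and marginalized out.
P(r \mid d) \;=\; \sum_{c} P(r \mid c, d)\, P(c \mid d)
```

Under this view, probing asks whether a model's behavior is consistent with the $P(r \mid c, d)$ term, i.e., whether supplying or perturbing the explanation $c$ changes its preference among candidate responses.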

Authors (6)
  1. Pei Zhou (30 papers)
  2. Pegah Jandaghi (9 papers)
  3. Bill Yuchen Lin (72 papers)
  4. Justin Cho (1 paper)
  5. Jay Pujara (44 papers)
  6. Xiang Ren (194 papers)
Citations (15)