Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation (2110.08501v4)

Published 16 Oct 2021 in cs.CL

Abstract: Implicit knowledge, such as common sense, is key to fluid human conversations. Current neural response generation (RG) models are trained to generate responses directly, omitting unstated implicit knowledge. In this paper, we present Think-Before-Speaking (TBS), a generative approach to first externalize implicit commonsense knowledge (think) and use this knowledge to generate responses (speak). We expect that externalizing implicit knowledge allows more efficient learning, produces more informative responses, and enables more explainable models. We analyze different choices to collect knowledge-aligned dialogues, represent implicit knowledge, and transition between knowledge and dialogues. Empirical results show TBS models outperform end-to-end and knowledge-augmented RG baselines on most automatic metrics and generate more informative, specific, and commonsense-following responses, as evaluated by human annotators. TBS also generates knowledge that makes sense and is relevant to the dialogue around 85% of the time.
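The two-stage "think, then speak" pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the stub generators, and the separator token are all hypothetical stand-ins for a fine-tuned seq2seq model and the paper's actual knowledge/dialogue transition format.

```python
# Hypothetical sketch of the Think-Before-Speaking (TBS) pipeline.
# All names and the separator token below are illustrative assumptions,
# not taken from the paper's code.

KNOWLEDGE_SEP = "<knowledge>"  # assumed delimiter between knowledge and dialogue


def tbs_generate(history, think, speak):
    """Stage 1 ("think"): externalize implicit commonsense knowledge.
    Stage 2 ("speak"): condition the response on both the dialogue
    history and the generated knowledge."""
    knowledge = think(history)
    prompt = f"{history} {KNOWLEDGE_SEP} {knowledge}"
    response = speak(prompt)
    return knowledge, response


# Toy stand-ins for the model's two decoding stages.
def toy_think(history):
    return "coffee is used for staying awake"


def toy_speak(prompt):
    return "Maybe grab a coffee to power through your deadline?"


if __name__ == "__main__":
    k, r = tbs_generate(
        "I'm so tired, but I have a deadline tonight.", toy_think, toy_speak
    )
    print(k)  # the externalized knowledge ("think")
    print(r)  # the knowledge-conditioned reply ("speak")
```

In a real TBS model a single generator would produce the knowledge and response as one sequence; the sketch splits the stages into two callables only to make the conditioning order explicit.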

Authors (8)
  1. Pei Zhou (30 papers)
  2. Karthik Gopalakrishnan (34 papers)
  3. Behnam Hedayatnia (27 papers)
  4. Seokhwan Kim (29 papers)
  5. Jay Pujara (44 papers)
  6. Xiang Ren (194 papers)
  7. Yang Liu (2253 papers)
  8. Dilek Hakkani-Tur (94 papers)
Citations (39)