
A Controllable Model of Grounded Response Generation (2005.00613v2)

Published 1 May 2020 in cs.CL

Abstract: Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses. Attempts to boost informativeness alone come at the expense of factual accuracy, as attested by pretrained LLMs' propensity to "hallucinate" facts. While this may be mitigated by access to background knowledge, there is scant guarantee of relevance and informativeness in generated responses. We propose a framework that we call controllable grounded response generation (CGRG), in which lexical control phrases are either provided by a user or automatically extracted by a control phrase predictor from dialogue context and grounding knowledge. Quantitative and qualitative results show that, using this framework, a transformer-based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
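The abstract does not spell out how the inductive attention mechanism operates. As a rough intuition only, one can think of it as masking attention so that response tokens see the dialogue context and the grounding tokens covered by control phrases, while ungrounded (irrelevant) grounding tokens are masked out. The sketch below is a hypothetical simplification of that idea; the function name, span format, and masking policy are assumptions, not the paper's actual implementation.

```python
def inductive_attention_mask(context_len, grounding_len, phrase_spans, response_len):
    """Build a boolean attention mask for response tokens (rows) over
    context + grounding tokens (columns).

    Hypothetical simplification of CGRG-style inductive attention:
      - every response token may attend to all dialogue-context tokens;
      - grounding tokens are attendable only if they fall inside a
        control-phrase span (half-open [start, end) indices within the
        grounding segment); all other grounding tokens are masked out.
    """
    cols = context_len + grounding_len
    mask = [[False] * cols for _ in range(response_len)]
    for row in mask:
        # Full access to the dialogue context.
        for j in range(context_len):
            row[j] = True
        # Grounding tokens are visible only inside control-phrase spans.
        for start, end in phrase_spans:
            for j in range(start, end):
                row[context_len + j] = True
    return mask


# Example: 3 context tokens, 5 grounding tokens, one control phrase
# covering grounding positions 1-2, and a 2-token response.
mask = inductive_attention_mask(3, 5, [(1, 3)], 2)
```

In a full model this mask would be applied inside the transformer's self-attention (e.g., added as `-inf` to masked logits before the softmax), so that irrelevant grounding text cannot influence generation.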

Authors (11)
  1. Zeqiu Wu (15 papers)
  2. Michel Galley (50 papers)
  3. Chris Brockett (37 papers)
  4. Yizhe Zhang (127 papers)
  5. Xiang Gao (210 papers)
  6. Chris Quirk (11 papers)
  7. Rik Koncel-Kedziorski (19 papers)
  8. Jianfeng Gao (344 papers)
  9. Hannaneh Hajishirzi (176 papers)
  10. Mari Ostendorf (57 papers)
  11. Bill Dolan (45 papers)
Citations (81)