
Counterfactual Off-Policy Training for Neural Response Generation (2004.14507v2)

Published 29 Apr 2020 in cs.LG, cs.AI, and cs.CL

Abstract: Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses. In this paper, we propose to explore potential responses by counterfactual reasoning. Given an observed response, the counterfactual reasoning model automatically infers the outcome of an alternative policy that could have been taken. The resulting counterfactual response synthesized in hindsight is of higher quality than the response synthesized from scratch. Training on the counterfactual responses under the adversarial learning framework helps to explore the high-reward area of the potential response space. An empirical study on the DailyDialog dataset shows that our approach significantly outperforms the HRED model as well as the conventional adversarial learning approaches.
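The core step in the abstract — inferring, from an observed response, the outcome an alternative policy *would have* produced under the same circumstances — is a counterfactual query over categorical decisions. One common way to make such queries well-defined is the Gumbel-Max structural causal model: abduct the Gumbel noise consistent with the observed choice, then replay that noise under the alternative policy's logits. The sketch below illustrates this for a single categorical decision; it is a hedged illustration of the general technique, not necessarily the paper's exact implementation, and all function names are ours.

```python
import math
import random

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def sample_gumbel(loc=0.0):
    """Sample from a Gumbel distribution with the given location."""
    return loc - math.log(-math.log(random.random()))

def truncated_gumbel(loc, bound):
    """Sample Gumbel(loc) conditioned on being below `bound` (inverse CDF)."""
    u = random.random()
    return loc - math.log(math.exp(loc - bound) - math.log(u))

def abduct_noise(logits, observed):
    """Abduction: infer Gumbel(0) noise terms consistent with `observed`
    being the argmax of logits + noise (top-down Gumbel sampling)."""
    max_g = sample_gumbel(logsumexp(logits))  # the maximum shifted Gumbel
    noise = []
    for i, l in enumerate(logits):
        # the observed index attains the max; all others are truncated below it
        shifted = max_g if i == observed else truncated_gumbel(l, max_g)
        noise.append(shifted - l)  # recover the Gumbel(0) noise component
    return noise

def counterfactual_choice(new_logits, noise):
    """Action the alternative policy would have taken under the same noise."""
    scores = [l + e for l, e in zip(new_logits, noise)]
    return max(range(len(scores)), key=scores.__getitem__)
```

By construction, replaying the abducted noise under the *original* logits reproduces the observed choice, while replaying it under an alternative policy's logits yields the counterfactual response token — synthesized "in hindsight" rather than from scratch.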

Authors (4)
  1. Qingfu Zhu (39 papers)
  2. Weinan Zhang (322 papers)
  3. Ting Liu (329 papers)
  4. William Yang Wang (254 papers)
Citations (1)