
Consistent Dialogue Generation with Self-supervised Feature Learning (1903.05759v4)

Published 13 Mar 2019 in cs.CL

Abstract: Generating responses that are consistent with the dialogue context is one of the central challenges in building engaging conversational agents. We demonstrate that neural conversation models can be geared towards generating consistent responses by maintaining certain features related to topics and personas throughout the conversation. Past work has required external supervision that exploits features such as user identities that are often unavailable. In our approach, topic and persona feature extractors are trained using a contrastive training scheme that utilizes the natural structure of dialogue data. We further adopt a feature disentangling loss which, paired with controllable response generation techniques, allows us to promote or demote certain learned topics and persona features. Evaluation results demonstrate the model's ability to capture meaningful topics and persona features. The incorporation of the learned features brings significant improvement in terms of the quality of generated responses on two dialogue datasets.
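The contrastive training scheme described in the abstract exploits the natural structure of dialogue data: utterances from the same conversation should share topic/persona features, while utterances from different conversations should not. A minimal sketch of such an objective, assuming an InfoNCE-style formulation (the paper's exact loss and feature extractors may differ; embeddings and names here are illustrative):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: the anchor utterance embedding
    should be more similar to the positive (an utterance drawn from the
    same conversation) than to negatives (utterances from other
    conversations)."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity logits: positive at index 0, then all negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Softmax cross-entropy with the positive as the target class.
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# Toy utterance embeddings: the positive points roughly the same way
# as the anchor, negatives are random directions.
rng = np.random.default_rng(0)
anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])
negatives = [rng.standard_normal(3) for _ in range(5)]
loss = info_nce_loss(anchor, positive, negatives)
```

In training, minimizing this loss pushes the feature extractor to assign similar topic/persona vectors to utterances within one dialogue, so the learned features can later be promoted or demoted via controllable generation.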

Authors (7)
  1. Yizhe Zhang (127 papers)
  2. Xiang Gao (210 papers)
  3. Sungjin Lee (46 papers)
  4. Chris Brockett (37 papers)
  5. Michel Galley (50 papers)
  6. Jianfeng Gao (344 papers)
  7. Bill Dolan (45 papers)
Citations (28)