Stylized Knowledge-Grounded Dialogue Generation via Disentangled Template Rewriting (2204.05610v1)

Published 12 Apr 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Current Knowledge-Grounded Dialogue Generation (KDG) models specialize in producing rational and factual responses. However, to establish long-term relationships with users, a KDG model needs the capability to generate responses in a desired style or with a desired attribute. We therefore study a new problem: Stylized Knowledge-Grounded Dialogue Generation (SKDG). It presents two challenges: (1) how to train an SKDG model when no <context, knowledge, stylized response> triples are available, and (2) how to stay coherent with the context and preserve the knowledge when generating a stylized response. In this paper, we propose a novel disentangled template rewriting (DTR) method, which generates responses by combining disentangled style templates (from a monolingual stylized corpus) with content templates (from a KDG corpus). The entire framework is end-to-end differentiable and learned without supervision. Extensive experiments on two benchmarks show that DTR significantly improves on all evaluation metrics over previous state-of-the-art stylized dialogue generation methods. Moreover, DTR achieves performance comparable to state-of-the-art KDG methods in the standard KDG evaluation setting.
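
The abstract only sketches the mechanism, so a toy illustration may help. The snippet below is a hand-written, purely symbolic sketch of the template-rewriting idea under stated assumptions: a `[CONTENT]` slot marker, and the function names and example strings, are all hypothetical. The actual DTR model is a learned, end-to-end differentiable neural system, not string substitution.

```python
# Hypothetical sketch of disentangled template rewriting (DTR), as described
# in the abstract: a style template mined from a monolingual stylized corpus
# keeps its style-bearing words but leaves a content slot open; the content
# from a knowledge-grounded (KDG) response fills that slot, so the knowledge
# is preserved verbatim while the surrounding style changes.

SLOT = "[CONTENT]"  # hypothetical placeholder for the disentangled content

def make_style_template(stylized_sentence: str, content_span: str) -> str:
    """Disentangle a stylized sentence: mask its content-bearing span,
    keeping only the style-bearing words around the slot."""
    return stylized_sentence.replace(content_span, SLOT)

def rewrite(style_template: str, content: str) -> str:
    """Combine the two disentangled templates into a stylized,
    knowledge-preserving response."""
    return style_template.replace(SLOT, content)

if __name__ == "__main__":
    # Knowledge-grounded content from a (hypothetical) KDG response:
    content = "the Eiffel Tower is 330 metres tall"

    # Sentence from a (hypothetical) monolingual stylized corpus:
    styled = "My dear Watson, it rained all night, as anyone can plainly see."
    style_template = make_style_template(styled, "it rained all night")
    # -> "My dear Watson, [CONTENT], as anyone can plainly see."

    print(rewrite(style_template, content))
    # -> "My dear Watson, the Eiffel Tower is 330 metres tall,
    #     as anyone can plainly see."
```

Even this toy shows why disentanglement addresses the paper's second challenge: the knowledge-bearing span is copied into the output unchanged, while the style comes entirely from a corpus that never contains <context, knowledge, stylized response> triples.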

Authors (9)
  1. Qingfeng Sun (40 papers)
  2. Can Xu (98 papers)
  3. Huang Hu (18 papers)
  4. Yujing Wang (53 papers)
  5. Jian Miao (2 papers)
  6. Xiubo Geng (36 papers)
  7. Yining Chen (35 papers)
  8. Fei Xu (117 papers)
  9. Daxin Jiang (138 papers)
Citations (10)