
Learning to Express in Knowledge-Grounded Conversation (2204.05805v1)

Published 12 Apr 2022 in cs.CL

Abstract: Grounding dialogue generation in external knowledge has shown great potential for building systems that reply with knowledgeable and engaging responses. Existing studies focus on how to synthesize a response with proper knowledge, yet neglect that the same knowledge can be expressed differently by different speakers, even in the same context. In this work, we consider two aspects of knowledge expression: the structure of the response and the style of the content in each part. We introduce two sequential latent variables to represent the structure and the content style, respectively. We propose a segmentation-based generation model and optimize it with a variational approach to discover the underlying patterns of knowledge expression in a response. Evaluation results on two benchmarks indicate that our model can learn the structural style defined by a few examples and generate responses in the desired content style.
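The variational optimization the abstract describes can be sketched as a standard evidence lower bound (ELBO) over the two latent variables; the notation below (dialogue context $C$, grounding knowledge $K$, response $r$, structure variable $z$, content-style variable $c$) is assumed for illustration and not taken from the paper itself:

```latex
\log p_\theta(r \mid C, K)
\;\ge\;
\mathbb{E}_{q_\phi(z, c \mid r, C, K)}
  \bigl[\log p_\theta(r \mid z, c, C, K)\bigr]
\;-\;
\mathrm{KL}\!\bigl(q_\phi(z, c \mid r, C, K)
  \,\|\, p_\theta(z, c \mid C, K)\bigr)
```

Here $q_\phi$ is an approximate posterior over the structure and content-style sequences, and the KL term regularizes them toward a prior conditioned on context and knowledge; the paper's exact factorization of $z$ and $c$ as sequential variables may differ from this generic form.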

Authors (6)
  1. Xueliang Zhao (19 papers)
  2. Tingchen Fu (14 papers)
  3. Chongyang Tao (61 papers)
  4. Wei Wu (481 papers)
  5. Dongyan Zhao (144 papers)
  6. Rui Yan (250 papers)
Citations (6)