Multi-level Adaptive Contrastive Learning for Knowledge Internalization in Dialogue Generation (2310.08943v2)

Published 13 Oct 2023 in cs.CL

Abstract: Knowledge-grounded dialogue generation aims to mitigate text degeneration by incorporating external knowledge to supplement the context. However, models often fail to internalize this information into responses in a human-like manner; instead, they simply insert segments of the provided knowledge into generic responses. As a result, the generated responses tend to be tedious, incoherent, and lacking in interactivity, meaning the degeneration problem remains unsolved. In this work, we first find that such copying-style degeneration is primarily due to the weak likelihood objective, which allows the model to "cheat" the objective by merely duplicating knowledge segments through superficial, overlap-based pattern matching. To overcome this challenge, we propose a Multi-level Adaptive Contrastive Learning (MACL) framework that dynamically samples negative examples and penalizes degeneration behaviors at both the token and sequence levels. Extensive experiments on the WoW dataset demonstrate the effectiveness of our approach across various pre-trained models.
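The abstract describes penalizing copying behavior at the token level alongside the standard likelihood objective. As a rough illustration only (the paper's actual MACL loss, negative sampling strategy, and function names below are not given in this abstract), one common way to express such a penalty is an unlikelihood-style term that pushes down the probability of tokens flagged as superficial copies of the knowledge text:

```python
import math

def copy_penalized_loss(token_probs, copied_mask, lam=0.5):
    """Illustrative sketch: average NLL over the target tokens, plus an
    unlikelihood term (-log(1 - p)) on tokens flagged as copied from the
    knowledge segment. `copied_mask` and `lam` are hypothetical names,
    not from the paper."""
    # Standard likelihood objective: average negative log-probability.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    # Penalty on tokens marked as superficial copies of the knowledge.
    neg = [p for p, m in zip(token_probs, copied_mask) if m]
    penalty = -sum(math.log(1.0 - p) for p in neg) / max(len(neg), 1)
    return nll + lam * penalty
```

Under this sketch, assigning higher probability to a copied token can raise the combined loss even though it lowers the plain NLL, which is the intended pressure against overlap-based copying.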

Authors (9)
  1. Chenxu Yang (11 papers)
  2. Zheng Lin (104 papers)
  3. Lanrui Wang (8 papers)
  4. Chong Tian (5 papers)
  5. Liang Pang (94 papers)
  6. Jiangnan Li (30 papers)
  7. Qirong Ho (28 papers)
  8. Yanan Cao (34 papers)
  9. Weiping Wang (123 papers)
Citations (1)
