
Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters (2105.06232v5)

Published 13 May 2021 in cs.CL and cs.AI

Abstract: To diversify and enrich generated dialogue responses, knowledge-grounded dialogue has been investigated in recent years. Existing methods tackle the knowledge-grounding challenge by retrieving relevant sentences from a large corpus and augmenting the dialogues with explicit extra information. Despite their success, however, these approaches suffer in inference efficiency. This paper proposes KnowExpert, a framework that bypasses the explicit retrieval process by injecting knowledge into pre-trained language models with lightweight adapters, adapting them to the knowledge-grounded dialogue task. To the best of our knowledge, this is the first attempt to tackle this task without retrieval under an open-domain chit-chat scenario. Experimental results show that KnowExpert performs comparably with some retrieval-based baselines while being more time-efficient at inference, demonstrating the effectiveness of the proposed method.
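The lightweight adapters mentioned in the abstract are typically small bottleneck modules inserted between transformer layers, so only a few extra parameters need training while the base model stays frozen. The sketch below is a minimal, illustrative NumPy version of one such bottleneck adapter (down-projection, nonlinearity, up-projection, residual connection); the dimensions and initialization are hypothetical and not taken from the paper.

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project, ReLU, up-project, add residual.

    h:       (n_tokens, d_model) hidden states from a frozen transformer layer
    W_down:  (d_model, d_bottleneck) trainable down-projection
    W_up:    (d_bottleneck, d_model) trainable up-projection
    """
    z = np.maximum(h @ W_down, 0.0)  # ReLU in the low-dimensional bottleneck
    return h + z @ W_up              # residual keeps the frozen model's signal

# Illustrative dimensions (hypothetical, much smaller than a real model)
rng = np.random.default_rng(0)
d_model, d_bottleneck = 8, 2
W_down = rng.normal(scale=0.02, size=(d_model, d_bottleneck))
W_up = rng.normal(scale=0.02, size=(d_bottleneck, d_model))

h = rng.normal(size=(4, d_model))  # 4 token hidden states
out = adapter(h, W_down, W_up)
print(out.shape)  # (4, 8): output has the same shape as the input
```

Because the residual path dominates at initialization (small weights in the bottleneck), the adapter starts near the identity and only gradually specializes, which is one reason such modules can encode knowledge cheaply without retraining the full model.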

Authors (8)
  1. Yan Xu (258 papers)
  2. Etsuko Ishii (18 papers)
  3. Samuel Cahyawijaya (75 papers)
  4. Zihan Liu (102 papers)
  5. Genta Indra Winata (94 papers)
  6. Andrea Madotto (64 papers)
  7. Dan Su (101 papers)
  8. Pascale Fung (150 papers)
Citations (40)