
Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion (2203.13224v2)

Published 24 Mar 2022 in cs.CL and cs.AI

Abstract: Language models (LMs) have recently been shown to generate more factual responses by employing modularity (Zhou et al., 2021) in combination with retrieval (Adolphs et al., 2021). We extend the recent approach of Adolphs et al. (2021) to include internet search as a module. Our SeeKeR (Search engine->Knowledge->Response) method thus applies a single LM to three modular tasks in succession: search, generating knowledge, and generating a final response. We show that, when using SeeKeR as a dialogue model, it outperforms the state-of-the-art model BlenderBot 2 (Chen et al., 2021) on open-domain knowledge-grounded conversations for the same number of parameters, in terms of consistency, knowledge and per-turn engagingness. SeeKeR applied to topical prompt completions as a standard language model outperforms GPT2 (Radford et al., 2019) and GPT3 (Brown et al., 2020) in terms of factuality and topicality, despite GPT3 being a vastly larger model. Our code and models are made publicly available.
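The three-stage pipeline described in the abstract can be pictured as one shared language model invoked three times with different prompts, with an internet search call in between. The sketch below is illustrative only: `generate`, `search`, and the bracketed control markers are hypothetical stand-ins, not the actual interface or tokens from the authors' released code.

```python
from typing import Callable, List

# Illustrative sketch of the SeeKeR Search -> Knowledge -> Response pipeline.
# `generate` stands in for the single shared LM, `search` for the internet
# search module; the control markers are made up for readability.

def seeker_respond(
    context: str,
    generate: Callable[[str], str],      # one LM reused for all three modules
    search: Callable[[str], List[str]],  # returns retrieved document snippets
) -> str:
    # 1) Search: the LM turns the dialogue context into a search query.
    query = generate(f"{context}\n[generate search query]")
    documents = search(query)

    # 2) Knowledge: conditioned on the retrieved documents, the same LM
    #    generates a relevant knowledge sentence.
    knowledge = generate(
        context + "\n" + "\n".join(documents) + "\n[generate knowledge]"
    )

    # 3) Response: the LM produces the final reply, conditioned on the
    #    dialogue context plus the generated knowledge sentence.
    return generate(f"{context}\n[knowledge] {knowledge}\n[generate response]")
```

Because the same model handles all three modules, each stage can be trained and evaluated separately while inference remains a simple sequential composition.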

Authors (6)
  1. Kurt Shuster (28 papers)
  2. Mojtaba Komeili (13 papers)
  3. Leonard Adolphs (10 papers)
  4. Stephen Roller (27 papers)
  5. Arthur Szlam (86 papers)
  6. Jason Weston (130 papers)
Citations (114)