
Context Generation Improves Open Domain Question Answering (2210.06349v2)

Published 12 Oct 2022 in cs.CL and cs.AI

Abstract: Closed-book question answering (QA) requires a model to directly answer an open-domain question without access to any external knowledge. Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage the stored knowledge. However, they do not fully exploit the parameterized knowledge. To address this issue, we propose a two-stage, closed-book QA framework which employs a coarse-to-fine approach to extract relevant knowledge and answer a question. Our approach first generates a related context for a given question by prompting a pretrained LM. We then prompt the same LM for answer prediction using the generated context and the question. Additionally, to eliminate failure caused by context uncertainty, we marginalize over generated contexts. Experimental results on three QA benchmarks show that our method significantly outperforms previous closed-book QA methods (e.g. exact matching 68.6% vs. 55.3%), and is on par with open-book methods that exploit external knowledge sources (e.g. 68.6% vs. 68.0%). Our method is able to better exploit the stored knowledge in pretrained LMs without adding extra learnable parameters or needing finetuning, and paves the way for hybrid models that integrate pretrained LMs with external knowledge.
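The abstract describes a generate-then-read pipeline: the LM first produces candidate contexts for the question, then answers conditioned on each context, and the final prediction marginalizes over contexts. Below is a minimal, hedged sketch of that flow. The callables `lm_generate` and `lm_score` are hypothetical placeholders for a pretrained LM interface, not part of the paper's code or any specific library; prompt wording and the number of contexts are illustrative assumptions.

```python
from collections import defaultdict

def context_generation_qa(question, lm_generate, lm_score, num_contexts=8):
    """Sketch of a two-stage, closed-book QA pipeline (generate-then-read).

    lm_generate(prompt, n) -> list of n sampled completions (placeholder).
    lm_score(prompt, completion) -> float score, e.g. log-likelihood (placeholder).
    """
    # Stage 1: prompt the pretrained LM to generate several candidate contexts.
    context_prompt = (
        f"Generate a background passage that helps answer the question.\n"
        f"Question: {question}\nPassage:"
    )
    contexts = lm_generate(context_prompt, n=num_contexts)

    # Stage 2: prompt the same LM for an answer conditioned on each context,
    # then marginalize over contexts by accumulating per-answer scores.
    answer_scores = defaultdict(float)
    for context in contexts:
        answer_prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
        answer = lm_generate(answer_prompt, n=1)[0].strip()
        answer_scores[answer] += lm_score(answer_prompt, answer)

    # Return the answer with the highest marginalized score.
    return max(answer_scores, key=answer_scores.get)
```

The marginalization step is the key design choice: instead of trusting a single generated context, answers that are supported by many sampled contexts accumulate more score, which the abstract credits with reducing failures caused by context uncertainty.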

Authors (9)
  1. Dan Su (101 papers)
  2. Mostofa Patwary (34 papers)
  3. Shrimai Prabhumoye (40 papers)
  4. Peng Xu (357 papers)
  5. Ryan Prenger (10 papers)
  6. Mohammad Shoeybi (60 papers)
  7. Pascale Fung (151 papers)
  8. Anima Anandkumar (236 papers)
  9. Bryan Catanzaro (123 papers)
Citations (6)
