
Few-Shot Data Synthesis for Open Domain Multi-Hop Question Answering (2305.13691v2)

Published 23 May 2023 in cs.CL

Abstract: Few-shot learning for open domain multi-hop question answering typically relies on the in-context learning capability of LLMs. While powerful, these LLMs usually contain tens or hundreds of billions of parameters, making them rather inefficient at inference time. To improve the performance of smaller LLMs, we propose a data synthesis framework for multi-hop question answering that requires fewer than 10 human-annotated question-answer pairs. Our framework depends only on rich, naturally occurring relationships among documents and is built upon data generation functions parameterized by LLMs and prompts. We synthesize millions of multi-hop questions and claims to finetune LLMs, which we evaluate on popular benchmarks for multi-hop question answering and fact verification. Empirically, our approach improves model performance significantly, allowing the finetuned models to be competitive with GPT-3.5-based approaches while being almost one-third the size in parameter count.
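The abstract's core idea, data generation functions parameterized by an LLM and a prompt, applied to document pairs joined by naturally occurring links, can be illustrated with a minimal sketch. Everything below (the `Document` structure, the `generate` stub, and the two-hop composition) is a hypothetical illustration of the general technique, not the authors' actual code or API.

```python
# Hedged sketch: synthesize a 2-hop question from two linked documents.
# All names here are illustrative assumptions, not the paper's implementation.

from dataclasses import dataclass, field


@dataclass
class Document:
    title: str
    text: str
    links: list = field(default_factory=list)  # titles of linked documents


def generate(prompt: str, context: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"[model output for: {prompt.format(context=context)}]"


def synthesize_multihop(doc_a: Document, doc_b: Document) -> dict:
    """Compose a single-hop question about doc_a with a rewrite that also
    requires doc_b (reachable via a link), yielding one 2-hop example."""
    hop1 = generate("Write a question answered by: {context}", doc_a.text)
    hop2 = generate(
        "Rewrite the question so answering it also requires: {context}",
        doc_b.text,
    )
    return {"hop1": hop1, "question": hop2,
            "evidence": [doc_a.title, doc_b.title]}


# A document pair connected by a hyperlink supplies the "naturally
# occurring relationship" the framework exploits.
paris = Document("Paris", "Paris is the capital of France.", ["France"])
france = Document("France", "France is a country in Western Europe.")
example = synthesize_multihop(paris, france)
```

Running this over millions of linked document pairs, with real LLM calls in place of the stub, would yield the kind of large synthetic finetuning set the abstract describes.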

Authors (3)
  1. Mingda Chen (25 papers)
  2. Xilun Chen (31 papers)
  3. Wen-tau Yih (84 papers)
Citations (6)