
Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation (2406.00179v1)

Published 31 May 2024 in cs.CL and cs.AI

Abstract: We explore the use of long-context capabilities in LLMs to create synthetic reading comprehension data from entire books. Previous efforts to construct such datasets relied on crowd-sourcing, but the emergence of transformers with a context size of 1 million or more tokens now enables entirely automatic approaches. Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text, such as questions involving character arcs, broader themes, or the consequences of early actions later in the story. We propose a holistic pipeline for automatic data generation including question generation, answering, and model scoring using an "Evaluator". We find that a relative approach, comparing answers between models in a pairwise fashion and ranking with a Bradley-Terry model, provides a more consistent and differentiating scoring mechanism than an absolute scorer that rates answers individually. We also show that LLMs from different model families produce moderate agreement in their ratings. We ground our approach using the manually curated NarrativeQA dataset, where our evaluator shows excellent agreement with human judgement and even finds errors in the dataset. Using our automatic evaluation approach, we show that using an entire book as context produces superior reading comprehension performance compared to baseline no-context (parametric knowledge only) and retrieval-based approaches.
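The abstract's pairwise ranking step can be sketched with a minimal Bradley-Terry fit. This is an illustrative implementation, not the paper's code: the win matrix, model count, and iteration budget below are assumptions, and the update is the classic minorize-maximize (MM) iteration for the Bradley-Terry model.

```python
# Minimal Bradley-Terry ranking sketch (illustrative; not the paper's code).
# Given pairwise win counts wins[i][j] = number of times QA system i's answer
# was preferred over system j's, estimate a strength p_i per system via the
# standard MM update:  p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j),
# where W_i is i's total wins and n_ij = wins[i][j] + wins[j][i].

def bradley_terry(wins, iters=100):
    """wins: square list-of-lists of pairwise win counts."""
    n = len(wins)
    p = [1.0] * n  # initial strengths
    for _ in range(iters):
        new_p = []
        for i in range(n):
            total_wins = sum(wins[i])
            denom = sum(
                (wins[i][j] + wins[j][i]) / (p[i] + p[j])
                for j in range(n) if j != i
            )
            new_p.append(total_wins / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # normalize (strengths are scale-invariant)
    return p

# Hypothetical side-by-side outcomes for three QA systems:
# system 0 usually beats 1 and 2; system 1 usually beats 2.
wins = [
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
]
strengths = bradley_terry(wins)
ranking = sorted(range(len(wins)), key=lambda i: -strengths[i])
```

The resulting `strengths` induce a total order over systems, which is what makes pairwise (relative) evaluation more differentiating than absolute per-answer scores.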

Authors (12)
  1. Bernd Bohnet (21 papers)
  2. Kevin Swersky (51 papers)
  3. Rosanne Liu (25 papers)
  4. Pranjal Awasthi (67 papers)
  5. Azade Nova (13 papers)
  6. Javier Snaider (4 papers)
  7. Hanie Sedghi (35 papers)
  8. Aaron T Parisi (2 papers)
  9. Michael Collins (46 papers)
  10. Angeliki Lazaridou (34 papers)
  11. Orhan Firat (80 papers)
  12. Noah Fiedel (22 papers)
Citations (3)