
Finding Generalizable Evidence by Learning to Convince Q&A Models (1909.05863v1)

Published 12 Sep 2019 in cs.CL, cs.AI, cs.IR, and cs.MA

Abstract: We propose a system that finds the strongest supporting evidence for a given answer to a question, using passage-based question-answering (QA) as a testbed. We train evidence agents to select the passage sentences that most convince a pretrained QA model of a given answer, if the QA model received those sentences instead of the full passage. Rather than finding evidence that convinces one model alone, we find that agents select evidence that generalizes; agent-chosen evidence increases the plausibility of the supported answer, as judged by other QA models and humans. Given its general nature, this approach improves QA in a robust manner: using agent-selected evidence (i) humans can correctly answer questions with only ~20% of the full passage and (ii) QA models can generalize to longer passages and harder questions.
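The selection procedure the abstract describes, scoring each passage sentence by how strongly it convinces a QA model of a given answer and keeping the most convincing ones, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `overlap_score` function is a hypothetical stand-in for a pretrained QA model's answer probability, and `select_evidence` is a simple greedy ranker rather than a trained evidence agent.

```python
# Sketch of evidence selection: score each sentence by how much it
# supports a given answer, then keep the top-k in passage order.
# overlap_score is a toy stand-in for P(answer | question, sentence)
# from a pretrained QA model (the paper uses neural QA models).

def overlap_score(sentence: str, question: str, answer: str) -> float:
    """Toy proxy for a QA model's confidence in `answer` given only `sentence`."""
    tokens = set(sentence.lower().split())
    target = set(question.lower().split()) | set(answer.lower().split())
    return len(tokens & target) / max(len(target), 1)

def select_evidence(passage_sentences, question, answer, k=2, scorer=overlap_score):
    """Greedy evidence agent: rank sentences by how strongly they convince
    the (stand-in) QA model of `answer`; return the top-k in passage order."""
    ranked = sorted(range(len(passage_sentences)),
                    key=lambda i: scorer(passage_sentences[i], question, answer),
                    reverse=True)
    return [passage_sentences[i] for i in sorted(ranked[:k])]

passage = [
    "The Amazon is the largest rainforest on Earth.",
    "It spans nine countries in South America.",
    "Many tourists visit Paris every year.",
]
print(select_evidence(passage, "Which rainforest is the largest?", "The Amazon"))
```

With a real QA model as the scorer, the same loop yields the agent-chosen evidence the paper evaluates: a small subset of sentences (roughly 20% of the passage, per the abstract) shown to other models or humans in place of the full text.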

Authors (6)
  1. Ethan Perez (55 papers)
  2. Siddharth Karamcheti (26 papers)
  3. Rob Fergus (67 papers)
  4. Jason Weston (130 papers)
  5. Douwe Kiela (85 papers)
  6. Kyunghyun Cho (292 papers)
Citations (35)
