
Two-Turn Debate Doesn't Help Humans Answer Hard Reading Comprehension Questions (2210.10860v1)

Published 19 Oct 2022 in cs.CL

Abstract: The use of language-model-based question-answering systems to aid humans in completing difficult tasks is limited, in part, by the unreliability of the text these systems generate. Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If this is helpful, we may be able to increase our justified trust in language-model-based systems by asking them to produce these arguments where needed. Previous research has shown that just a single turn of arguments in this format is not helpful to humans. However, as debate settings are characterized by a back-and-forth dialogue, we follow up on previous results to test whether adding a second round of counter-arguments is helpful to humans. We find that, regardless of whether they have access to arguments or not, humans perform similarly on our task. These findings suggest that, in the case of answering reading comprehension questions, debate is not a helpful format.

Authors (7)
  1. Alicia Parrish (31 papers)
  2. Harsh Trivedi (29 papers)
  3. Nikita Nangia (17 papers)
  4. Vishakh Padmakumar (22 papers)
  5. Jason Phang (40 papers)
  6. Amanpreet Singh Saimbhi (2 papers)
  7. Samuel R. Bowman (103 papers)
Citations (8)
