
Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension (2109.06853v1)

Published 14 Sep 2021 in cs.CL

Abstract: How can we generate concise explanations for multi-hop Reading Comprehension (RC)? The current strategies of identifying supporting sentences can be seen as an extractive, question-focused summarization of the input text. However, these extractive explanations are not necessarily concise, i.e., not minimally sufficient for answering a question. Instead, we advocate an abstractive approach, where we propose to generate a question-focused, abstractive summary of the input paragraphs and then feed it to an RC system. Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner: we start from the supervised model and then refine it through trial and error, maximizing a conciseness-promoting reward function. Our experiments demonstrate that the proposed abstractive explainer can generate more compact explanations than an extractive explainer with limited supervision (only 2k instances) while maintaining sufficiency.
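The reward described in the abstract trades off two objectives: the summary must remain sufficient for the RC system to answer correctly, while shorter summaries are rewarded for conciseness. A minimal sketch of such a reward, assuming a simple additive combination with a length-ratio penalty (the function name, weighting scheme, and `alpha` parameter are illustrative, not taken from the paper):

```python
def conciseness_reward(answer_correct: bool, summary_len: int,
                       source_len: int, alpha: float = 0.5) -> float:
    """Hypothetical reward: sufficiency term plus a weighted conciseness term.

    answer_correct -- whether the RC system answered correctly from the summary
    summary_len    -- length of the generated summary (e.g., in tokens)
    source_len     -- length of the original input paragraphs
    alpha          -- illustrative weight on the conciseness term
    """
    sufficiency = 1.0 if answer_correct else 0.0
    # Shorter summaries relative to the source earn a larger conciseness term.
    conciseness = 1.0 - min(summary_len / max(source_len, 1), 1.0)
    return sufficiency + alpha * conciseness

# Example: a correct answer from a 20-token summary of a 100-token source.
reward = conciseness_reward(True, 20, 100)  # 1.0 + 0.5 * 0.8 = 1.4
```

In the semi-supervised setup the abstract describes, a reward of this shape would drive the trial-and-error phase after supervised pretraining, pushing the explainer toward summaries that stay answerable but shrink in length.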

Authors (5)
  1. Naoya Inoue (29 papers)
  2. Harsh Trivedi (29 papers)
  3. Steven Sinha (1 paper)
  4. Niranjan Balasubramanian (53 papers)
  5. Kentaro Inui (119 papers)
Citations (13)