Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension (2109.06853v1)
Abstract: How can we generate concise explanations for multi-hop Reading Comprehension (RC)? The current strategies of identifying supporting sentences can be seen as an extractive question-focused summarization of the input text. However, these extractive explanations are not necessarily concise, i.e., they are not minimally sufficient for answering a question. Instead, we advocate for an abstractive approach, where we propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system. Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner: we start from the supervised model and then train it further through trial and error, maximizing a conciseness-promoting reward function. Our experiments demonstrate that the proposed abstractive explainer can generate more compact explanations than an extractive explainer with limited supervision (only 2k instances) while maintaining sufficiency.
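The conciseness-promoting reward mentioned in the abstract can be sketched as a trade-off between sufficiency (the downstream RC model can still answer from the summary alone) and compression. The function below is an illustrative sketch, not the paper's actual formulation; the name `conciseness_reward`, the linear form, and the weight `alpha` are all assumptions.

```python
def conciseness_reward(sufficiency: float, summary_len: int, input_len: int,
                       alpha: float = 1.0) -> float:
    """Hypothetical reward combining sufficiency and conciseness.

    sufficiency: score in [0, 1] for whether the RC model answers correctly
                 when reading only the generated summary (assumed given).
    summary_len, input_len: token counts of the summary and input paragraphs.
    alpha: illustrative weight on the compression bonus.
    """
    # Compression bonus: higher when the summary is much shorter than the input.
    compression = 1.0 - summary_len / max(input_len, 1)
    return sufficiency + alpha * compression
```

Under a reward like this, a policy-gradient update would favor summaries that stay short while keeping the RC model's answer correct; an equally sufficient but shorter summary receives a strictly higher reward.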
- Naoya Inoue
- Harsh Trivedi
- Steven Sinha
- Niranjan Balasubramanian
- Kentaro Inui