Complex Reading Comprehension Through Question Decomposition (2211.03277v1)

Published 7 Nov 2022 in cs.CL

Abstract: Multi-hop reading comprehension requires not only the ability to reason over raw text but also the ability to combine multiple pieces of evidence. We propose a novel learning approach that helps LLMs better understand difficult multi-hop questions and perform "complex, compositional" reasoning. Our model first learns to decompose each multi-hop question into several sub-questions using a trainable question decomposer. Instead of answering these sub-questions, we directly concatenate them with the original question and context, and leverage a reading comprehension model to predict the answer in a sequence-to-sequence manner. By using the same LLM for these two components, our best separate/unified t5-base variants outperform the baseline by 7.2/6.1 absolute F1 points on a hard subset of the DROP dataset.
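
The following is a minimal sketch of the decompose-then-concatenate pipeline the abstract describes, assuming the Hugging Face transformers library and t5-base checkpoints. The model names, the "decompose:" prompt, and the input concatenation format are illustrative assumptions, not the authors' released artifacts.

```python
# Hypothetical sketch: two t5-base models, one fine-tuned to decompose
# multi-hop questions, one to read and answer. (The paper also reports a
# unified variant that shares a single model for both roles.)
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
decomposer = T5ForConditionalGeneration.from_pretrained("t5-base")  # placeholder weights
reader = T5ForConditionalGeneration.from_pretrained("t5-base")      # placeholder weights


def decompose(question: str) -> str:
    """Generate sub-questions for a multi-hop question (assumed prompt format)."""
    inputs = tokenizer("decompose: " + question, return_tensors="pt", truncation=True)
    out = decomposer.generate(**inputs, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)


def answer(question: str, context: str) -> str:
    """Concatenate the generated sub-questions with the original question and
    context, then predict the answer sequence-to-sequence."""
    sub_questions = decompose(question)
    source = f"question: {question} sub-questions: {sub_questions} context: {context}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    out = reader.generate(**inputs, max_length=32)
    return tokenizer.decode(out[0], skip_special_tokens=True)


if __name__ == "__main__":
    ctx = "The touchdown was scored in the third quarter. The field goal came in the fourth."
    q = "How many scoring plays happened in the second half?"
    print(answer(q, ctx))
```

With fine-tuned weights in place of the placeholder checkpoints, the reader sees both the original question and its sub-questions, which is the mechanism the paper credits for the F1 gains on the hard DROP subset.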

Authors (3)
  1. Xiao-Yu Guo (25 papers)
  2. Yuan-Fang Li (90 papers)
  3. Gholamreza Haffari (141 papers)
Citations (7)
