
Repurposing Entailment for Multi-Hop Question Answering Tasks (1904.09380v1)

Published 20 Apr 2019 in cs.CL, cs.AI, and cs.LG

Abstract: Question Answering (QA) naturally reduces to an entailment problem, namely, verifying whether some text entails the answer to a question. However, for multi-hop QA tasks, which require reasoning with multiple sentences, it remains unclear how best to utilize entailment models pre-trained on large-scale datasets such as SNLI, which are based on sentence pairs. We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks. Multee uses (i) a local module that helps locate important sentences, thereby avoiding distracting information, and (ii) a global module that aggregates information by effectively incorporating importance weights. Importantly, we show that both modules can use entailment functions pre-trained on large-scale NLI datasets. We evaluate performance on MultiRC and OpenBookQA, two multi-hop QA datasets. When using an entailment function pre-trained on NLI datasets, Multee outperforms QA models trained only on the target QA datasets and the OpenAI transformer models. The code is available at https://github.com/StonyBrookNLP/multee.
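To make the two-module design concrete, here is a minimal PyTorch sketch of the idea the abstract describes: per-sentence entailment scores act as importance weights (the local module), and an importance-weighted aggregation feeds a final entailment decision (the global module). The class name `MulteeSketch`, the vector dimensions, and the simple feed-forward scorer are illustrative assumptions, not the paper's actual implementation; see the linked repository for that.

```python
# A minimal sketch, assuming pre-encoded sentence and hypothesis vectors.
# Names, dimensions, and the feed-forward scorer are illustrative only.
import torch
import torch.nn as nn

class MulteeSketch(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        # Stand-in for a sentence-pair entailment function (in Multee, this
        # role is played by a function pre-trained on NLI data); it maps a
        # (premise, hypothesis) pair to a single entailment score.
        self.entail_score = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        # Final decision over the aggregated context representation.
        self.classify = nn.Linear(2 * dim, 1)

    def forward(self, sentences: torch.Tensor, hypothesis: torch.Tensor):
        # sentences:  (num_sentences, dim) encoded context sentences
        # hypothesis: (dim,) encoded question + candidate answer
        hyp = hypothesis.unsqueeze(0).expand(sentences.size(0), -1)
        pairs = torch.cat([sentences, hyp], dim=-1)

        # Local module: per-sentence entailment scores become importance
        # weights, down-weighting distracting sentences.
        weights = torch.softmax(self.entail_score(pairs).squeeze(-1), dim=0)

        # Global module: importance-weighted aggregation over sentences,
        # followed by an entailment decision against the hypothesis.
        context = (weights.unsqueeze(-1) * sentences).sum(dim=0)
        logit = self.classify(torch.cat([context, hypothesis], dim=-1))
        return logit, weights

# Usage with random encodings (in practice these would come from a
# pre-trained NLI encoder, which this sketch does not include):
model = MulteeSketch()
logit, weights = model(torch.randn(5, 128), torch.randn(128))
```

The point of the sketch is the division of labor: the same entailment function can both select relevant sentences and, through the weighted aggregate, support the final multi-sentence decision.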

Authors (5)
  1. Harsh Trivedi (29 papers)
  2. Heeyoung Kwon (8 papers)
  3. Tushar Khot (53 papers)
  4. Ashish Sabharwal (84 papers)
  5. Niranjan Balasubramanian (53 papers)
Citations (45)
