
Reading Comprehension as Natural Language Inference: A Semantic Analysis (2010.01713v1)

Published 4 Oct 2020 in cs.CL, cs.AI, and cs.LG

Abstract: In the recent past, Natural Language Inference (NLI) has gained significant attention, particularly given its promise for downstream NLP tasks. However, its true impact is limited and has not been well studied. Therefore, in this paper, we explore the utility of NLI for one of the most prominent downstream tasks, viz. Question Answering (QA). We transform one of the largest available MRC datasets (RACE) into an NLI form, and compare the performance of a state-of-the-art model (RoBERTa) on both forms. We propose new characterizations of questions, and evaluate the performance of QA and NLI models on these categories. We highlight clear categories for which the model performs better when the data is presented in a coherent entailment form and in a structured question-answer concatenation form, respectively.
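As a rough illustration of the kind of transformation the abstract describes, the sketch below converts a RACE-style reading-comprehension item (passage, question, candidate option) into an NLI premise-hypothesis pair. The function name, the cloze substitution, and the plain question-option concatenation are illustrative assumptions for this sketch, not the authors' exact conversion procedure.

```python
# Minimal sketch (assumed template) for turning an MRC example into an NLI pair.
# The passage becomes the premise; the hypothesis is built from the question
# and a candidate answer option.

def mrc_to_nli(passage: str, question: str, option: str) -> dict:
    """Build an NLI (premise, hypothesis) pair from a reading-comprehension item.

    For cloze-style questions containing a blank ("_"), the option is
    substituted into the blank; otherwise the question and option are
    concatenated. This is a hypothetical scheme, not the paper's exact one.
    """
    if "_" in question:
        hypothesis = question.replace("_", option)
    else:
        hypothesis = f"{question} {option}"
    return {"premise": passage, "hypothesis": hypothesis}


# Example usage: the correct option would be labeled "entailment",
# the remaining options "not entailment".
example = mrc_to_nli(
    passage="The museum opens at nine and closes at five on weekdays.",
    question="The museum closes at _ on weekdays.",
    option="five",
)
print(example["hypothesis"])  # "The museum closes at five on weekdays."
```

A pair built this way can be fed to an NLI model such as RoBERTa fine-tuned for entailment, while the original (passage, question, option) triple can be scored by a standard multiple-choice QA head, allowing the two forms to be compared as in the paper.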

Authors (6)
  1. Anshuman Mishra (5 papers)
  2. Dhruvesh Patel (8 papers)
  3. Aparna Vijayakumar (2 papers)
  4. Xiang Li (1003 papers)
  5. Pavan Kapanipathi (35 papers)
  6. Kartik Talamadupula (38 papers)
Citations (8)