
Looking Beyond Sentence-Level Natural Language Inference for Downstream Tasks (2009.09099v1)

Published 18 Sep 2020 in cs.CL, cs.AI, and cs.LG

Abstract: In recent years, the Natural Language Inference (NLI) task has garnered significant attention, with new datasets and models achieving near human-level performance on it. However, the full promise of NLI -- particularly that it learns knowledge that should be generalizable to other downstream NLP tasks -- has not been realized. In this paper, we study this unfulfilled promise through the lens of two downstream tasks: question answering (QA) and text summarization. We conjecture that a key difference between the NLI datasets and these downstream tasks concerns the length of the premise, and that creating new long-premise NLI datasets out of existing QA datasets is a promising avenue for training a truly generalizable NLI model. We validate our conjecture by showing competitive results on the task of QA and obtaining the best reported results on the task of Checking Factual Correctness of Summaries.
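
The conversion conjectured in the abstract, building long-premise NLI data out of existing QA datasets, can be pictured as a simple data transformation. The sketch below illustrates that general recipe; it is not the paper's actual method. The helper names (`qa_to_nli`, `make_hypothesis`), the naive hypothesis template, and the label set are all assumptions made for this example.

```python
# A minimal sketch (not the paper's released pipeline) of the recipe the
# abstract conjectures: recast a QA example as a long-premise NLI example.
# `make_hypothesis` and `qa_to_nli` are hypothetical names; a real system
# would use a learned or rule-based question-to-statement rewriter.

def make_hypothesis(question: str, answer: str) -> str:
    """Turn a QA pair into a declarative hypothesis (naive template)."""
    return f"The answer to the question '{question}' is {answer}."

def qa_to_nli(context: str, question: str, answer: str, correct: bool) -> dict:
    """Build one NLI example: the full passage becomes the (long) premise;
    the declarative form of the QA pair becomes the hypothesis."""
    return {
        "premise": context,  # whole passage, hence a much longer premise than sentence-level NLI
        "hypothesis": make_hypothesis(question, answer),
        "label": "entailment" if correct else "not_entailment",
    }

# Toy usage: a correct answer yields an "entailment" example; pairing the
# same question with a wrong answer would yield "not_entailment".
example = qa_to_nli(
    context="The Amazon River flows through Brazil, Peru, and Colombia.",
    question="Which countries does the Amazon River flow through?",
    answer="Brazil, Peru, and Colombia",
    correct=True,
)
print(example["label"])  # entailment
```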

Authors (6)
  1. Anshuman Mishra (5 papers)
  2. Dhruvesh Patel (8 papers)
  3. Aparna Vijayakumar (2 papers)
  4. Xiang Li (1002 papers)
  5. Pavan Kapanipathi (35 papers)
  6. Kartik Talamadupula (38 papers)
Citations (6)