Reliable Evaluations for Natural Language Inference based on a Unified Cross-dataset Benchmark (2010.07676v1)

Published 15 Oct 2020 in cs.CL and cs.AI

Abstract: Recent studies show that crowd-sourced Natural Language Inference (NLI) datasets may suffer from significant biases like annotation artifacts. Models utilizing these superficial clues gain mirage advantages on the in-domain testing set, which makes the evaluation results over-estimated. The lack of trustworthy evaluation settings and benchmarks stalls the progress of NLI research. In this paper, we propose to assess a model's trustworthy generalization performance with cross-datasets evaluation. We present a new unified cross-datasets benchmark with 14 NLI datasets, and re-evaluate 9 widely-used neural network-based NLI models as well as 5 recently proposed debiasing methods for annotation artifacts. Our proposed evaluation scheme and experimental baselines could provide a basis to inspire future reliable NLI research.

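The core idea of the proposed cross-datasets scheme is to train (or fine-tune) a model on one NLI dataset and report its accuracy on every other dataset, so that in-domain annotation artifacts cannot inflate the score. A minimal sketch of that evaluation loop is shown below; the `train_model` and `evaluate` helpers and the dataset list are hypothetical placeholders for illustration, not the paper's actual code or datasets pipeline.

```python
# Hypothetical sketch of a cross-dataset NLI evaluation loop.
# `train_model` and `evaluate` are placeholder callables, not the paper's code.

from typing import Callable, Dict, List


def cross_dataset_matrix(
    dataset_names: List[str],
    train_model: Callable[[str], object],      # trains/fine-tunes a model on one source dataset
    evaluate: Callable[[object, str], float],  # returns accuracy of a model on a target dataset
) -> Dict[str, Dict[str, float]]:
    """Train on each source dataset and test on every target dataset.

    Off-diagonal entries of the returned matrix measure cross-dataset
    generalization; the diagonal is the usual (potentially over-estimated)
    in-domain score.
    """
    results: Dict[str, Dict[str, float]] = {}
    for source in dataset_names:
        model = train_model(source)
        results[source] = {target: evaluate(model, target) for target in dataset_names}
    return results


if __name__ == "__main__":
    # Example with a few of the NLI datasets commonly used in such benchmarks.
    names = ["SNLI", "MNLI", "SICK"]
    dummy_train = lambda name: name            # stand-in "model"
    dummy_eval = lambda model, target: 0.0     # stand-in accuracy metric
    print(cross_dataset_matrix(names, dummy_train, dummy_eval))
```

Reporting the full source-by-target matrix, rather than a single in-domain number, is what lets the benchmark separate models that genuinely generalize from those exploiting superficial clues.
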
Authors (6)
  1. Guanhua Zhang (24 papers)
  2. Bing Bai (39 papers)
  3. Jian Liang (162 papers)
  4. Kun Bai (24 papers)
  5. Conghui Zhu (20 papers)
  6. Tiejun Zhao (70 papers)