Can neural networks understand monotonicity reasoning? (1906.06448v2)

Published 15 Jun 2019 in cs.CL

Abstract: Monotonicity reasoning is an important reasoning skill for any intelligent natural language inference (NLI) model, in that it requires capturing the interaction between lexical and syntactic structures. Since no wide-coverage test set for monotonicity reasoning has been developed, it remains unclear whether neural models can perform monotonicity reasoning properly. To investigate this issue, we introduce the Monotonicity Entailment Dataset (MED). Performance by state-of-the-art NLI models on the new test set is substantially worse, with accuracy under 55%, especially on downward reasoning. In addition, analysis using a monotonicity-driven data augmentation method showed that these models may be limited in their ability to generalize between upward and downward reasoning.
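
For readers unfamiliar with the task, here is a minimal sketch of the two reasoning directions the abstract refers to. The premise/hypothesis pairs below are illustrative examples of the phenomenon, not actual MED entries: in an upward-entailing context, replacing a word with a more general one preserves truth, while a downward-entailing context (e.g. under "no") reverses the direction of valid substitution.

```python
# Illustrative NLI pairs showing upward vs. downward monotonicity.
# These are hypothetical examples, not entries from the MED dataset.
examples = [
    # (premise, hypothesis, gold_label, reasoning_direction)
    ("A dog is running in the park",
     "An animal is running in the park", "entailment", "upward"),
    # Under "no", specialization (animal -> dog) is what preserves truth.
    ("No animals are running in the park",
     "No dogs are running in the park", "entailment", "downward"),
    # Generalizing in a downward context does NOT preserve entailment.
    ("A dog is running in the park",
     "A poodle is running in the park", "neutral", "upward"),
]

for premise, hypothesis, label, direction in examples:
    print(f"[{direction}] {premise!r} -> {hypothesis!r}: {label}")
```

The paper's finding is that models trained mostly on upward-entailing examples tend to fail on pairs like the second one, where the inference direction is flipped by the downward-entailing operator.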

Authors (7)
  1. Hitomi Yanaka (29 papers)
  2. Koji Mineshima (20 papers)
  3. Daisuke Bekki (15 papers)
  4. Kentaro Inui (119 papers)
  5. Satoshi Sekine (11 papers)
  6. Lasha Abzianidze (16 papers)
  7. Johan Bos (27 papers)
Citations (76)