
Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference (2203.04857v2)

Published 9 Mar 2022 in cs.CL

Abstract: We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision. The model samples and rewards specific reasoning paths through policy gradient, in which the introspective revision algorithm modifies intermediate symbolic reasoning steps to discover reward-earning operations and leverages external knowledge to alleviate spurious reasoning and training inefficiency. The framework is supported by properly designed local relation models to avoid input entangling, which helps ensure the interpretability of the proof paths. The proposed model has built-in interpretability and shows superior capability in monotonicity inference, systematic generalization, and interpretability, compared to previous models on existing datasets.
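The training scheme the abstract describes, sampling a path of symbolic operations, revising intermediate steps so the path can earn reward, then applying a policy-gradient update, can be sketched in miniature. This is a hedged toy illustration, not the paper's implementation: the operation set, the `introspective_revision` helper, and the reward function are all hypothetical stand-ins for the paper's natural-logic machinery.

```python
import math
import random

# Toy natural-logic operation inventory (illustrative names only).
OPERATIONS = ["equivalence", "entailment", "reverse", "negation"]

class ToyPolicy:
    """Tabular softmax policy over one operation per reasoning step."""
    def __init__(self, n_steps, lr=0.1):
        self.logits = [[0.0] * len(OPERATIONS) for _ in range(n_steps)]
        self.lr = lr

    def probs(self, step):
        m = max(self.logits[step])
        exps = [math.exp(l - m) for l in self.logits[step]]
        z = sum(exps)
        return [e / z for e in exps]

    def sample_path(self):
        return [random.choices(range(len(OPERATIONS)),
                               weights=self.probs(s))[0]
                for s in range(len(self.logits))]

    def update(self, path, reward):
        # REINFORCE: nudge log-probs of the taken actions, scaled by reward.
        for step, action in enumerate(path):
            p = self.probs(step)
            for a in range(len(OPERATIONS)):
                grad = (1.0 if a == action else 0.0) - p[a]
                self.logits[step][a] += self.lr * reward * grad

def reward_fn(path, gold):
    # Toy terminal reward: 1 only when the whole proof path is correct.
    return 1.0 if path == gold else 0.0

def introspective_revision(path, gold):
    # Toy stand-in for introspective revision: repair the first step that
    # disagrees with a knowledge-derived target, so an otherwise-unrewarded
    # sampled path can be turned into a reward-earning one.
    revised = list(path)
    for i, (a, g) in enumerate(zip(path, gold)):
        if a != g:
            revised[i] = g
            break
    return revised

random.seed(0)
gold_path = [1, 0, 2]           # hypothetical gold operation sequence
policy = ToyPolicy(n_steps=3)
for _ in range(500):
    path = policy.sample_path()
    if reward_fn(path, gold_path) == 0.0:
        path = introspective_revision(path, gold_path)
    policy.update(path, reward_fn(path, gold_path))
```

Without the revision step, a sparse terminal reward rarely fires early in training; revising near-miss paths toward reward-earning ones is what makes the policy-gradient updates informative, which mirrors the training-inefficiency point in the abstract.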

Authors (4)
  1. Yufei Feng (18 papers)
  2. Xiaoyu Yang (85 papers)
  3. Xiaodan Zhu (94 papers)
  4. Michael Greenspan (30 papers)
Citations (9)
