
Language Models as Inductive Reasoners (2212.10923v3)

Published 21 Dec 2022 in cs.CL and cs.AI

Abstract: Inductive reasoning is a core component of human intelligence. In past research on inductive reasoning within computer science, formal language has been used to represent knowledge (specifically, facts and rules). However, formal language causes systematic problems for inductive reasoning, such as the inability to handle raw input like natural language, sensitivity to mislabeled data, and the inability to handle ambiguous input. To this end, we propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language. New automatic metrics are also proposed and analysed for the evaluation of this task. With DEER, we investigate a modern approach to inductive reasoning in which we use natural language, rather than formal language, as the representation of knowledge, and use pretrained LLMs as "reasoners". Moreover, we provide the first comprehensive analysis of how well pretrained LLMs can induce natural language rules from natural language facts. We also propose a new framework for this task that draws insights from the philosophy literature, and we show in the experiment section that it surpasses baselines in both automatic and human evaluations. We discuss our future perspectives for inductive reasoning in Section 7. Dataset and code are available at https://github.com/ZonglinY/Inductive_Reasoning.
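The task is easy to picture: given several natural language facts, a pretrained LM is prompted to produce a natural language rule that generalizes over them. Below is a minimal, hypothetical Python sketch of what one rule-fact pair and an induction prompt might look like; the record layout, the fact and rule wording, and the prompt template are illustrative assumptions, not taken from the DEER dataset or the paper's code.

```python
# Illustrative sketch of the rule-induction task described in the abstract.
# The rule-fact pair and prompt wording below are assumptions for
# illustration, not the paper's actual DEER schema or prompt template.

# A hypothetical rule-fact pair in the spirit of DEER: several natural
# language facts paired with one natural language rule that covers them.
facts = [
    "Robins fly south in the autumn.",
    "Swallows fly south in the autumn.",
    "Geese fly south in the autumn.",
]
gold_rule = "If something is a migratory bird, then it flies south in the autumn."


def build_induction_prompt(facts: list[str]) -> str:
    """Format the facts into a prompt asking a pretrained LM to induce a
    general natural language rule (few-shot demonstrations omitted)."""
    fact_block = "\n".join(f"Fact: {f}" for f in facts)
    return (
        "Induce a general rule that explains the following facts.\n"
        f"{fact_block}\n"
        "Rule:"
    )


if __name__ == "__main__":
    # An LM's completion of this prompt would then be compared against
    # gold_rule using the paper's automatic metrics and human evaluation.
    print(build_induction_prompt(facts))
```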

Authors (8)
  1. Zonglin Yang (21 papers)
  2. Li Dong (154 papers)
  3. Xinya Du (41 papers)
  4. Hao Cheng (190 papers)
  5. Erik Cambria (136 papers)
  6. Xiaodong Liu (162 papers)
  7. Jianfeng Gao (344 papers)
  8. Furu Wei (291 papers)
Citations (25)
