Reason from Fallacy: Enhancing Large Language Models' Logical Reasoning through Logical Fallacy Understanding (2404.04293v1)

Published 4 Apr 2024 in cs.CL and cs.AI

Abstract: LLMs have demonstrated strong performance on many reasoning tasks, but they still struggle with more complicated ones, including logical reasoning. One non-negligible cause of LLMs' suboptimal performance on logical reasoning is their failure to correctly understand logical fallacies. To evaluate LLMs' capability of logical fallacy understanding (LFU), we propose five concrete tasks spanning three cognitive dimensions: WHAT, WHY, and HOW. For these LFU tasks, we construct a new dataset, LFUD, generated with GPT-4 and refined with a small amount of human effort. Our extensive experiments show that LFUD can be used not only to evaluate LLMs' LFU capability, but also to fine-tune LLMs for significantly improved performance on logical reasoning.
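
The abstract does not specify the exact task formats, so the following is only a minimal sketch of how the three cognitive dimensions (WHAT, WHY, HOW) might map onto concrete evaluation prompts. The `LFUExample` class, the sample fallacy, and the prompt wordings are all hypothetical illustrations, not drawn from LFUD.

```python
# Hypothetical sketch: one evaluation example per LFU cognitive dimension.
# None of these prompts or answers come from the LFUD dataset itself.

from dataclasses import dataclass


@dataclass
class LFUExample:
    dimension: str  # "WHAT", "WHY", or "HOW"
    prompt: str     # question posed to the LLM under evaluation
    answer: str     # reference answer used for scoring

# A stock example of a fallacious argument (illustrative only).
FALLACY = "Everyone I know likes this phone, so it must be the best phone."

examples = [
    # WHAT: can the model identify which fallacy is committed?
    LFUExample(
        dimension="WHAT",
        prompt=f"Identify the logical fallacy in: '{FALLACY}'",
        answer="Hasty generalization (appeal to popularity).",
    ),
    # WHY: can the model explain why the argument is invalid?
    LFUExample(
        dimension="WHY",
        prompt=f"Explain why the following argument is fallacious: '{FALLACY}'",
        answer="A small, biased sample cannot establish a universal claim.",
    ),
    # HOW: can the model repair the argument so it avoids the fallacy?
    LFUExample(
        dimension="HOW",
        prompt=f"Rewrite the following argument so it avoids the fallacy: '{FALLACY}'",
        answer="Independent reviews rank this phone highly, so it is likely a good choice.",
    ),
]

for ex in examples:
    print(f"[{ex.dimension}] {ex.prompt}\n  -> {ex.answer}\n")
```

Records in this shape could serve double duty, as the abstract suggests: as evaluation items (compare model output to `answer`) or as prompt-completion pairs for fine-tuning.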

Authors (7)
  1. Yanda Li (11 papers)
  2. Dixuan Wang (6 papers)
  3. Jiaqing Liang (62 papers)
  4. Guochao Jiang (12 papers)
  5. Qianyu He (26 papers)
  6. Yanghua Xiao (151 papers)
  7. Deqing Yang (55 papers)
Citations (4)