GLoRE: Evaluating Logical Reasoning of Large Language Models (2310.09107v1)

Published 13 Oct 2023 in cs.CL and cs.AI

Abstract: Recently, LLMs, including notable models such as GPT-4 and burgeoning community models, have showcased significant general language understanding abilities. However, there have been few attempts to assess the logical reasoning capacities of these LLMs, an essential facet of natural language understanding. To encourage further investigation in this area, we introduce GLoRE, a meticulously assembled General Logical Reasoning Evaluation benchmark comprising 12 datasets that span three different types of tasks. Our experimental results show that, compared with human performance and supervised fine-tuning, the logical reasoning capabilities of open-source LLMs still require substantial improvement; ChatGPT and GPT-4 exhibit strong logical reasoning capability, with GPT-4 surpassing ChatGPT by a large margin. We propose a self-consistency probing method to enhance the accuracy of ChatGPT and a fine-tuning method to boost the performance of an open-source LLM. We release the datasets and evaluation programs to facilitate future research.
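
The abstract does not detail the self-consistency probing method. The sketch below assumes the common sample-and-majority-vote formulation of self-consistency; `ask_model` is a hypothetical wrapper around the ChatGPT API and is not part of the paper's released code.

```python
# Hypothetical sketch of self-consistency probing, assumed here to be the
# standard sample-and-majority-vote scheme. `ask_model(question, temperature)`
# is a placeholder callable that returns one sampled answer string.
from collections import Counter

def self_consistency_answer(ask_model, question, n_samples=5):
    """Query the model several times with sampling enabled and return the
    most frequent answer plus its vote share as a rough confidence proxy."""
    answers = [ask_model(question, temperature=0.7) for _ in range(n_samples)]
    counts = Counter(a.strip().lower() for a in answers)
    best, votes = counts.most_common(1)[0]
    return best, votes / n_samples
```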

Authors (6)
  1. Zhiyang Teng (26 papers)
  2. Ruoxi Ning (4 papers)
  3. Jian Liu (404 papers)
  4. Qiji Zhou (8 papers)
  5. Yue Zhang (618 papers)
  6. Hanmeng Liu (11 papers)
Citations (4)