
Do large language models reason in a human-like manner?

Determine whether large language models reason in a genuinely human-like manner.


Background

The paper investigates whether LLMs align with human behavior and neurocognition during abstract reasoning, motivated by rapid advances in LLM capabilities and ongoing debate about the nature of their reasoning. Although LLMs achieve high performance on many tasks, it remains unresolved whether their internal processes constitute human-like reasoning rather than pattern matching.

To address this, the authors compare eight open-source LLMs with human participants on an abstract pattern-completion task, analyzing both behavioral performance and neural signatures measured as fixation-related potentials (FRPs) from EEG. They find that only the largest models approach human accuracy, and that intermediate model layers exhibit pattern-specific clustering that correlates modestly with human frontal FRPs. This suggests preliminary alignment while leaving open the broader question of whether LLMs truly reason like humans.
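To make the layer-to-FRP comparison concrete, below is a minimal representational-similarity sketch in Python. The array names, shapes, random placeholder data, and the correlation-distance metric are illustrative assumptions rather than the authors' actual pipeline; the sketch only shows one common way to test whether stimuli that cluster together in a model layer also cluster together in frontal FRPs.

```python
# Minimal sketch of a representational-similarity-style comparison between
# LLM layer activations and EEG fixation-related potentials (FRPs).
# All names, shapes, and data here are illustrative assumptions,
# not the paper's code or results.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patterns, d_model, n_channels = 12, 768, 32

# Hypothetical data: one embedding per abstract pattern taken from an
# intermediate model layer, and one averaged frontal-FRP vector per pattern.
layer_activations = rng.normal(size=(n_patterns, d_model))
frp_amplitudes = rng.normal(size=(n_patterns, n_channels))

# Pairwise dissimilarities across patterns (correlation distance), giving
# one representational dissimilarity vector per modality.
model_rdm = pdist(layer_activations, metric="correlation")
brain_rdm = pdist(frp_amplitudes, metric="correlation")

# Rank correlation between the two dissimilarity structures: higher values
# mean patterns that group together in the model also group together
# in the FRPs.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```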

References

Yet whether these models reason in a genuinely human‑like manner remains an open question.

Pinier et al., "Large Language Models Show Signs of Alignment with Human Neurocognition During Abstract Reasoning," arXiv:2508.10057, 12 Aug 2025; quoted from the Introduction, opening paragraph (page 1).