What is an "Abstract Reasoner"? Revisiting Experiments and Arguments about Large Language Models (2507.22457v1)
Published 30 Jul 2025 in cs.CL and cs.AI
Abstract: Recent work has argued that LLMs are not "abstract reasoners", citing their poor zero-shot performance on a variety of challenging tasks as evidence. We revisit these experiments in order to add nuance to the claim. First, we show that while LLMs indeed perform poorly in a zero-shot setting, even tuning a small subset of parameters for input encoding can enable near-perfect performance. However, we also show that this finetuning does not necessarily transfer across datasets. We take this collection of empirical results as an invitation to (re-)open the discussion of what it means to be an "abstract reasoner", and why it matters whether LLMs fit the bill.