Language models show human-like content effects on reasoning tasks (2207.07051v4)

Published 14 Jul 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Reasoning is a key ability for an intelligent system. Large language models (LMs) achieve above-chance performance on abstract reasoning tasks, but exhibit many imperfections. However, human abstract reasoning is also imperfect. For example, human reasoning is affected by our real-world knowledge and beliefs, and shows notable "content effects"; humans reason more reliably when the semantic content of a problem supports the correct logical inferences. These content-entangled reasoning patterns play a central role in debates about the fundamental nature of human intelligence. Here, we investigate whether language models, whose prior expectations capture some aspects of human knowledge, similarly mix content into their answers to logical problems. We explored this question across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task. We evaluate state-of-the-art language models, as well as humans, and find that the language models reflect many of the same patterns observed in humans across these tasks: like humans, models answer more accurately when the semantic content of a task supports the logical inferences. These parallels are reflected both in answer patterns and in lower-level features like the relationship between model answer distributions and human response times. Our findings have implications for understanding both these cognitive effects in humans, and the factors that contribute to LM performance.

Overview of Human-Like Content Effects in LLMs on Reasoning Tasks

The paper "LLMs show human-like content effects on reasoning tasks" presents an extensive investigation into the performance of LLMs (LMs) on a series of logical reasoning tasks, illuminating how these models display parallel content effects to humans. The authors focus on three specific reasoning tasks: natural language inference (NLI), syllogism validity judgment, and the Wason selection task, exploring how the semantic content of the tasks affects the performance of both humans and LLMs. This exploration sheds light on how LMs mirror human cognitive patterns, particularly the tendency to integrate semantic and contextual knowledge into logical reasoning.

Key Findings

The paper reveals several pivotal points about the capability and limitations of LMs in logical reasoning:

  1. Content Effects on LM and Human Reasoning: The research establishes that LMs, akin to human reasoners, exhibit significant content effects in logical reasoning. These models perform better on tasks when the semantic content aligns with realistic and believable scenarios, mirroring human reasoning biases.
  2. Task-Specific Observations:
    • Natural Language Inference: On NLI tasks, both LMs and humans display high accuracy, indicating relatively minor content effects due to the straightforward logic involved.
    • Syllogisms: This task shows more pronounced content effects: both LMs and humans are biased by the believability of the conclusions. In particular, they tend to affirm the validity of syllogisms whose conclusions are consistent with their beliefs; conversely, a logically valid syllogism with a belief-violating conclusion (such as the whales example sketched above) is more likely to be judged invalid.
    • Wason Selection Task: This classic logic task, in which a reasoner must select exactly the cards needed to test a rule of the form "if P then Q" (the P card and the not-Q card), proves the most challenging. Both humans and LLMs perform poorly with arbitrary rules, and both fare significantly better when the task content is framed in realistic terms.
  3. Confidence and Response Time Correlations: The authors examine the confidence of LMs as expressed through log probabilities, finding notable correlations between model "confidence" and human response times. Models assign larger log-probability differences between answer options (i.e., express higher confidence) on problems that humans answer more quickly, and when the logical inference aligns with prior knowledge; a minimal sketch of this kind of log-probability scoring appears after this list.
  4. Instruction-Tuning and Model Variability: Interestingly, instruction-tuned models did not show marked improvements in overcoming content effects compared to their base counterparts. However, model size and architecture (e.g., PaLM 2 variants, GPT-3.5) influence overall task performance, suggesting an avenue for further work on refining models for logical consistency.
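
The log-probability comparison in point 3 can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: it assumes an off-the-shelf GPT-2 model served through Hugging Face transformers, and the prompt wording, answer continuations (" valid" / " invalid"), and example syllogisms are illustrative choices.

```python
# Minimal sketch of a log-probability "confidence" measure: compare the
# log-probability a causal LM assigns to the answer words " valid" vs. " invalid"
# after a syllogism prompt. Model, prompt, and items are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of token log-probabilities of `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    full_ids = torch.cat([prompt_ids, cont_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total = 0.0
    # The token at position i is predicted by the logits at position i - 1.
    for i in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, i - 1, full_ids[0, i]].item()
    return total

# Two logically valid syllogisms: one believable, one belief-violating conclusion.
items = {
    "believable":   "All roses are flowers. All flowers are plants. "
                    "Therefore, all roses are plants.",
    "unbelievable": "All whales are mammals. All mammals can walk. "
                    "Therefore, all whales can walk.",
}

for label, argument in items.items():
    prompt = f"Argument: {argument}\nIs this argument logically valid or invalid? It is"
    # Confidence proxy: log P(" valid") - log P(" invalid") given the prompt.
    diff = continuation_logprob(prompt, " valid") - continuation_logprob(prompt, " invalid")
    print(f"{label:>12}: log-prob difference (valid - invalid) = {diff:+.2f}")
```

A content effect of the kind the paper reports would show up here as a systematically smaller (or even negative) gap for the belief-violating item, even though both arguments are equally valid.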

Implications

This paper's findings have significant implications for both scientific understanding of human cognition and the practical development of AI:

  • Cognitive Science Insight: The parallels between LM biases and human biases invite discussions on the cognitive mechanisms shared between learned statistical representations in humans and neural networks. Such insights encourage a deeper examination of the shared characteristics in emergent problem-solving strategies across biological and artificial intelligences.
  • AI Development: Practically, understanding these content effects can guide enhancements in the training paradigms of LMs, particularly through targeted interventions such as exposure to formal reasoning tasks. Moreover, addressing these biases could inform AI applications in contexts requiring robust logical reasoning capabilities, potentially mitigating weak points in AI decision-making systems.

Future Directions

The results suggest numerous pathways for future research:

  • Further studies should aim to pinpoint specific factors within the training datasets that lead to content effects in LMs, providing deeper insights into the learning processes of both humans and machines.
  • Exploring model architectures that can balance fast, heuristic reasoning (System 1) with more deliberative, symbolic reasoning (System 2) could enhance models' reasoning abilities, paralleling dual-process theories of human cognition.
  • Investigating interdisciplinary approaches to leverage these insights in the design of next-generation AI, possibly integrating learned semantic knowledge with more explicit symbolic reasoning mechanisms, could yield AI systems with superior reasoning skills.

In conclusion, this paper underscores the nuanced alignment of LMs with human cognitive patterns, notably in logical reasoning shaped by semantic content. These findings not only broaden our understanding of LM capabilities but also enrich the discourse on the interplay between human-like inference patterns and artificial intelligence systems.

Authors (8)
  1. Ishita Dasgupta (35 papers)
  2. Andrew K. Lampinen (24 papers)
  3. Stephanie C. Y. Chan (20 papers)
  4. Hannah R. Sheahan (2 papers)
  5. Antonia Creswell (21 papers)
  6. Dharshan Kumaran (9 papers)
  7. James L. McClelland (18 papers)
  8. Felix Hill (52 papers)
Citations (163)