Does ChatGPT Have a Mind? (2407.11015v1)

Published 27 Jun 2024 in cs.CL and cs.AI

Abstract: This paper examines the question of whether LLMs like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning to support these claims. Second, we explore whether LLMs exhibit robust dispositions to perform actions, a necessary component of folk psychology. We consider two prominent philosophical traditions, interpretationism and representationalism, to assess LLM action dispositions. While we find evidence suggesting LLMs may satisfy some criteria for having a mind, particularly in game-theoretic environments, we conclude that the data remains inconclusive. Additionally, we reply to several skeptical challenges to LLM folk psychology, including issues of sensory grounding, the "stochastic parrots" argument, and concerns about memorization. Our paper has three main upshots. First, LLMs do have robust internal representations. Second, there is an open question to answer about whether LLMs have robust action dispositions. Third, existing skeptical challenges to LLM representation do not survive philosophical scrutiny.

Authors (2)
  1. Simon Goldstein (3 papers)
  2. Benjamin A. Levinstein (2 papers)
Citations (2)

Summary

An Analysis of "Does ChatGPT Have a Mind?"

The paper "Does ChatGPT Have a Mind?" by Simon Goldstein and B.A. Levinstein provides a comprehensive analysis of whether LLMs such as ChatGPT can be said to possess minds. Central to this question is whether these models have a folk psychology consisting of beliefs, desires, and intentions. The authors undertake this investigation by examining internal representations and dispositional actions, supporting their claims with philosophical theories and machine learning interpretability research.

The authors begin by surveying philosophical frameworks of representation (informational, causal, structural, and teleosemantic) and argue that LLMs satisfy essential conditions for mental representation on each account. For instance, informational theories require that internal states carry probabilistic information about what they represent, and recent probing work indicates that LLM activations carry such information. They cite experiments such as Othello-GPT, in which a GPT model trained only on sequences of game moves appears to internally represent the state of the board, a finding corroborated by causal intervention techniques.
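
To make the probing methodology concrete, the following is a minimal sketch, not the authors' code or the original Othello-GPT pipeline: a linear probe is trained to decode a single board-square label from stored hidden activations. The file names and label scheme are placeholders for whatever activations one has extracted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumed precomputed: hidden states of shape (n_positions, d_model) collected
# while the model processes move sequences, plus the true occupancy of one
# board square (0 = empty, 1 = black, 2 = white) at each position.
activations = np.load("othello_hidden_states.npy")   # placeholder file
labels = np.load("board_square_labels.npy")          # placeholder file

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

# A linear probe: if logistic regression can read the square's state off the
# activations, that information is (at least) linearly decodable from them.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

High held-out probe accuracy shows the information is present in the activations; the causal interventions mentioned above are what show the model actually uses it, which is why the paper treats the two kinds of evidence together.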

A significant part of the paper addresses challenges to LLM representation: sensory grounding, stochastic parrots, and memorization. The sensory grounding argument holds that purely text-based LLMs lack any connection to the external world and therefore cannot genuinely represent it. Goldstein and Levinstein counter that the training text is itself causally downstream of the world, so an LLM that forms hypotheses about its environment on the basis of that text retains a causal connection of the kind representation requires.

The stochastic parrots argument questions whether LLMs truly understand or merely mimic surface patterns, given that their training objective is next-token prediction. The authors respond that more complex capacities, such as meaningful internal world models, can emerge as side effects of optimizing for prediction. Empirical observations of few-shot and in-context learning, as well as successful transfer across domains, further suggest that LLMs possess emergent reasoning capabilities rather than mere pattern recall.
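
For readers unfamiliar with the term, below is a minimal illustration of few-shot (in-context) prompting; the task and examples are invented for illustration, and the point is simply that the "training examples" live entirely in the prompt rather than in any weight update.

```python
# In-context learning: the model is never fine-tuned on this task; the
# demonstrations below are supplied at inference time inside the prompt.
few_shot_prompt = """Translate English to French.

English: The cat sleeps.
French: Le chat dort.

English: I like apples.
French: J'aime les pommes.

English: The weather is nice today.
French:"""

# Sent to a capable LLM, a prompt like this typically elicits the correct
# continuation; the authors treat this kind of behavior as evidence of
# capabilities beyond rote pattern matching.
print(few_shot_prompt)
```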

The authors also examine skepticism rooted in memorization, arguing that LLMs demonstrate substantial generalization beyond their training data. They cite the example of small transformer models trained on modular arithmetic, where an initial phase of memorization eventually gives way to the implementation of a generalizable algorithm.
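
As a concrete illustration of that memorization-to-generalization transition (often called "grokking" in the ML literature), here is a minimal PyTorch sketch of the standard modular-addition setup; the architecture, hyperparameters, and step counts are illustrative rather than those of the experiments the paper cites.

```python
import torch
import torch.nn as nn

# Task: predict (a + b) mod p from the pair (a, b), training on half the pairs.
p = 97
pairs = [(a, b) for a in range(p) for b in range(p)]
torch.manual_seed(0)
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2 :]

def make_batch(idx):
    x = torch.tensor([pairs[i] for i in idx])
    y = torch.tensor([(pairs[i][0] + pairs[i][1]) % p for i in idx])
    return x, y

X_train, y_train = make_batch(train_idx)
X_test, y_test = make_batch(test_idx)

model = nn.Sequential(
    nn.Embedding(p, 64),          # embed each operand
    nn.Flatten(),                 # concatenate the two operand embeddings
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, p),            # logits over the p possible answers
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()
    if step % 2000 == 0:
        with torch.no_grad():
            train_acc = (model(X_train).argmax(-1) == y_train).float().mean()
            test_acc = (model(X_test).argmax(-1) == y_test).float().mean()
        # Typical pattern: train accuracy saturates early (memorization), while
        # test accuracy jumps only much later (a general algorithm emerges).
        print(step, round(float(train_acc), 3), round(float(test_acc), 3))
```

Whether and when the test-accuracy jump appears depends on regularization and training length; the sketch only pins down the task setup, not a guaranteed reproduction of the phenomenon.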

The second central question is whether LLMs have dispositions to act, a necessary component of folk psychology. The analysis draws on interpretationist perspectives, such as Dennett's intentional stance, and representationalist perspectives, like those of Fodor and Dretske. These discussions turn largely on whether LLM behavior can be interpreted as genuinely goal-directed or whether it merely mirrors human-like interaction as a byproduct of the underlying computation.

Ultimately, the authors conclude that while there is strong evidence for internal representations in LLMs, the evidence that LLMs have robust beliefs, desires, and intentions is less clear-cut, owing largely to open questions about behavioral consistency and goal representation.

This paper has both practical and theoretical upshots. Practically, it underscores the importance of AI transparency and interpretability for detecting and improving folk psychological attributes in models. Theoretically, it urges further exploration of non-standard causal and teleosemantic theories and their implications for LLM capabilities. In sum, the paper offers a detailed philosophical and technical inquiry into the cognitive attributes of LLMs, challenging existing skepticism and laying a foundation for future research in AI cognition and the philosophy of mind.


HackerNews

  1. Does ChatGPT Have a Mind? (1 point, 0 comments)