Talking About Large Language Models (2212.03551v5)

Published 7 Dec 2022 in cs.CL and cs.LG

Abstract: Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are LLMs. The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.

Citations (183)

Summary

  • The paper presents a critical analysis of how scaling LLMs improves performance while cautioning against anthropomorphism.
  • It highlights the misleading use of terms like 'knows' or 'believes' in describing LLMs, advocating for clearer scientific language.
  • The study discusses future system designs that couple LLMs to external sources of ground truth, with the aim of improving the trustworthiness and accuracy of such systems.

Examination of "Talking About LLMs"

Murray Shanahan's paper sits at the intersection of technology and philosophy, focusing on the capabilities of LLMs. It presents a critical analysis of our anthropomorphic tendencies when engaging with these systems and argues for more scientifically precise discourse, with the aim of encouraging greater philosophical nuance in discussions of AI.

Core Insights

The paper underscores several key observations about LLMs:

  1. Scaling and Performance: LLMs such as GPT-3 demonstrate improved performance as the sizes of the model and its training dataset increase. This scaling also produces qualitative leaps in capability, making the models' mimicry of human language increasingly convincing.
  2. Mimicking Human Language: Shanahan discusses the anthropomorphic trap we fall into when LLMs deliver human-like responses. While humans intuitively ascribe human traits to LLMs, the underlying operation of these models remains fundamentally mechanical: predicting statistically likely continuations of token sequences (see the sketch after this list).
  3. Intentional Stance: The paper evaluates the use of familiar psychological terms such as "knows" or "believes" when describing LLMs. Such language, while convenient, can mislead and encourage overly human-like perceptions of AI functionality.
  4. External Reality and Truth: Shanahan argues that LLMs, taken on their own, lack the means to genuinely "know" or "believe," because they cannot engage with external reality or assess truth in the way humans do. This calls for caution when ascribing understanding or belief to them.
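
To make "predicting statistical continuations of token sequences" concrete, here is a minimal sketch of next-token prediction. It uses the publicly available GPT-2 model via the Hugging Face transformers library and echoes the "first person to walk on the Moon" example discussed in the paper; the choice of model, prompt, and top-5 display are illustrative assumptions, not anything prescribed by Shanahan.

```python
# Minimal sketch of next-token prediction: the model assigns a probability to
# every possible next token given the text so far. Requires the `torch` and
# `transformers` packages; GPT-2 is used only because it is small and public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob.item():.3f}")
```

Nothing in this loop consults the world; the model simply ranks continuations by their statistical fit to its training data, which is the point of Shanahan's caution about words like "knows."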

Implications and Future Directions

The implications of Shanahan’s arguments are both theoretical and practical:

  • Theoretical Refinement: The paper pushes for refinement in how we philosophically and linguistically frame AI systems. It suggests developing language that accurately reflects the capabilities and limitations of LLMs without falling into the trap of anthropomorphism.
  • Policy and Communication: For policymakers and the broader public, avoiding misleading representations of AI capabilities becomes essential to crafting reasonable expectations and regulations.
  • System Design and Trust: Embedding LLMs within larger systems that utilize factual external resources could move us closer to agents that display a form of "belief." However, this hinges on careful system design and the integration of robust mechanisms for interacting with reality.

Future Developments in AI

Anticipated future developments might include:

  • Advanced Embodiment: Future systems may see LLMs integrated into embodied agents capable of more interactive and meaningful engagements with their environment.
  • Enhanced Trust Mechanisms: Developing methods that ensure AI systems are faithful in executing logic-based tasks could augment trust in AI applications, potentially bridging the gap between artificial reasoning and human-like understanding.
  • Evolving Language Frameworks: With the continued assimilation of AI into human contexts, language describing AI capabilities may evolve, possibly introducing bespoke terminology that suits AI’s unique mechanics.

In "Talking About LLMs," Shanahan offers a precise examination of LLMs, recommending caution against anthropomorphism and urging clarity in AI discourse. This approach lays a foundation for more nuanced interactions with AI, fostering better understanding and trust between humans and machines.
