
Large Language Models and Emergence: A Complex Systems Perspective (2506.11135v1)

Published 10 Jun 2025 in cs.CL, cs.AI, cs.LG, and cs.NE

Abstract: Emergence is a concept in complexity science that describes how many-body systems manifest novel higher-level properties, properties that can be described by replacing high-dimensional mechanisms with lower-dimensional effective variables and theories. This is captured by the idea "more is different". Intelligence is a consummate emergent property manifesting increasingly efficient -- cheaper and faster -- uses of emergent capabilities to solve problems. This is captured by the idea "less is more". In this paper, we first examine claims that LLMs exhibit emergent capabilities, reviewing several approaches to quantifying emergence, and secondly ask whether LLMs possess emergent intelligence.

Summary

  • The paper argues that claimed emergent capabilities in LLMs arise from scaling effects, and that genuine emergence requires the discovery of internal coarse-grained representations.
  • It uses a complex systems framework with concepts like scaling, compression, and novel bases to analyze model behavior and performance.
  • It distinguishes between emergent capability, akin to highly engineered functions, and genuine emergent intelligence characterized by abstraction and analogy-making.

LLMs and Emergence: A Complex Systems Perspective

Introduction

The paper "LLMs and Emergence: A Complex Systems Perspective" explores the notion of emergence, particularly in the context of LLMs. In complexity science, emergence refers to novel higher-level properties of many-body systems that can be described by replacing high-dimensional mechanisms with lower-dimensional effective variables. The paper scrutinizes claims of emergent capabilities in LLMs and asks whether these models possess emergent intelligence. The authors argue that although LLMs exhibit emergent capabilities, the term "emergence" should be reserved for cases in which new internal coarse-grained representations within the neural network underpin successful task performance.

Claims of Emergence in LLMs

Emergence in LLMs has been primarily associated with sudden increases in accuracy tied to the scaling of network and data sizes. Notably, Wei et al. introduced the concept of emergent abilities, characterized by unexpected enhancements not present in smaller models. The paper addresses controversies surrounding the sharpness of these improvements and the generalization of emergent capabilities attributed to in-context learning and instruction tuning.
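One strand of this controversy (a general argument in the literature, not a mechanism specific to this paper) holds that apparent sharpness can depend on the evaluation metric: a smoothly improving per-token accuracy can look discontinuous under an all-or-nothing exact-match score. A minimal sketch, assuming a logistic per-token accuracy curve chosen purely for illustration:

```python
import math

def per_token_accuracy(scale):
    # Hypothetical smooth improvement: logistic in log10(parameter count)
    return 1 / (1 + math.exp(-(math.log10(scale) - 9)))

def exact_match_accuracy(scale, seq_len=10):
    # All-or-nothing metric: every token of a length-seq_len answer must be right
    return per_token_accuracy(scale) ** seq_len

for scale in (1e7, 1e8, 1e9, 1e10, 1e11):
    print(f"{scale:.0e}  per-token={per_token_accuracy(scale):.3f}  "
          f"exact-match={exact_match_accuracy(scale):.3f}")
```

Under these assumptions the per-token curve rises gradually with scale, while the exact-match curve stays near zero and then climbs abruptly, which is one proposed explanation for apparently sharp transitions.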

In evaluating these claims, it is crucial to distinguish between emergent capabilities and intelligence. The authors argue that while LLMs demonstrate emergent capabilities akin to highly engineered functions in calculators, they lack the simple modification and analogy-making mechanisms that underpin emergent intelligence in humans. This distinction underscores the need for a nuanced understanding of emergence in LLMs.

Emergence Framework

A comprehensive emergence framework, encompassing scaling, criticality, compression, novel bases, and generalization, is pivotal in situating LLM emergence. The paper contrasts "knowledge-out" (KO) emergence, wherein simpler components yield complex properties, with "knowledge-in" (KI) emergence arising from intricate inputs or environments. LLMs are posited as KI systems, necessitating evidence of causal structures that support new capabilities.

Scaling phenomena in LLMs, illustrated by double descent behavior, challenge traditional notions of emergence. While scaling up parameters or data can produce abrupt improvements, whether LLMs exhibit the inherent coarse-graining required for genuine emergence remains contentious. And although compression and the discovery of internal representations have shown promise, the authors emphasize that further work is needed to establish the concrete internal structures that would validate emergence claims.
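The coarse-graining invoked here, replacing many micro-variables with fewer effective macro-variables, can be illustrated with a toy example (the block-majority rule below is an illustrative stand-in, not a mechanism from the paper):

```python
def coarse_grain(micro, block_size=4):
    """Map each block of micro-variables to one effective macro-variable
    (the block majority), reducing dimensionality by a factor of block_size."""
    macro = []
    for i in range(0, len(micro), block_size):
        block = micro[i:i + block_size]
        macro.append(1 if 2 * sum(block) >= len(block) else 0)
    return macro

micro = [1, 1, 0, 1,  0, 0, 0, 1,  1, 1, 1, 1,  0, 0, 1, 0]
print(coarse_grain(micro))  # 16 micro-variables -> [1, 0, 1, 0]
```

The point of such a map is that higher-level regularities can be stated in terms of the four macro-variables without tracking the sixteen micro-variables; the paper's question is whether analogous effective variables demonstrably form inside trained networks.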

Distinguishing Between Emergent Capability and Intelligence

The paper stresses the importance of differentiating between emergent capabilities and emergent intelligence. Intelligence involves not only problem-solving ability but also a capacity for abstraction, parsimony, and analogy-making. LLMs possess expansive capabilities, yet they operate as overparameterized systems closer to calculation than to emergent intelligence. Emergent intelligence remains a hallmark of human cognition, where minimal energy expenditure yields maximum understanding.

Conclusions

The paper concludes that emergence in LLMs is predominantly a matter of capability rather than intelligence. Emergent capabilities, while notable, should not be conflated with genuine emergent intelligence. The potential of LLMs lies in harnessing inherent properties through scaling, compression, and the discovery of novel bases. Establishing more rigorous criteria for LLM emergence means identifying the structured mechanisms and coarse-grained variables that underpin sophistication and generalization.

Ultimately, the pursuit of emergent intelligence in LLMs offers a captivating frontier, demanding meticulous validation through the lens of complexity science. The outlined principles and insights pave the way for refinements in understanding emergence within complex AI systems, advancing both theoretical and practical implications.
