Is Complexity an Illusion? (2404.07227v4)

Published 31 Mar 2024 in cs.AI

Abstract: Simplicity is held by many to be the key to general intelligence. Simpler models tend to "generalise", identifying the cause or generator of data with greater sample efficiency. The implications of the correlation between simplicity and generalisation extend far beyond computer science, addressing questions of physics and even biology. Yet simplicity is a property of form, while generalisation is of function. In interactive settings, any correlation between the two depends on interpretation. In theory there could be no correlation and yet in practice, there is. Previous theoretical work showed generalisation to be a consequence of "weak" constraints implied by function, not form. Experiments demonstrated choosing weak constraints over simple forms yielded a 110-500% improvement in generalisation rate. Here we show that all constraints can take equally simple forms, regardless of weakness. However if forms are spatially extended, then function is represented using a finite subset of forms. If function is represented using a finite subset of forms, then we can force a correlation between simplicity and generalisation by making weak constraints take simple forms. If function is determined by a goal directed process that favours versatility (e.g. natural selection), then efficiency demands weak constraints take simple forms. Complexity has no causal influence on generalisation, but appears to, due to confounding.

Summary

  • The paper introduces a formal framework that argues simplicity is an artifact of finite abstraction layers rather than a driver of effective generalization.
  • It demonstrates that policy weakness, shaped by limited vocabularies, is a stronger predictor of sample efficiency than apparent simplicity.
  • Previously reported experiments show that choosing weak constraints over simple forms boosts generalization rates by 110-500%, with implications for AI design principles.

Is Complexity an Illusion?

The paper "Is Complexity an Illusion?" by Michael Timothy Bennett investigates the relationship between simplicity, generalization, and complexity within the context of artificial intelligence and beyond. In this work, Bennett challenges the traditional notion that simplicity directly influences generalization performance, proposing that simplicity is merely a byproduct of abstraction layers and finite vocabularies. The author constructs a formal framework to explore these concepts and provides empirical results to support the argument.

Core Concepts and Definitions

The paper begins by establishing a set of foundational definitions and axioms. Complexity is dissected into a formal structure, grounded in minimal assumptions about the existence and variation of states within an environment. The author defines complexity through the lens of declarative programs and abstraction layers, where an abstraction layer is essentially a finite vocabulary that maps inputs to outputs in a task.
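
To make these definitions concrete, the following sketch models the setting in Python. All names here (FACTS, STATES, extension) are invented for illustration and deliberately simplify the paper's formalism, in which declarative programs are identified with their extensions over a set of possible states.

```python
from itertools import combinations

# Illustrative sketch only, not the paper's exact formalism: states are
# sets of atomic facts, a declarative program is identified with its
# extension (the states in which it holds), and an abstraction layer is
# a finite vocabulary of such programs.

FACTS = {"a", "b", "c"}

# Every subset of FACTS is a possible state of the environment.
STATES = [frozenset(c) for r in range(len(FACTS) + 1)
          for c in combinations(sorted(FACTS), r)]

def extension(program: frozenset) -> set:
    """States in which the program holds: those containing all its facts."""
    return {s for s in STATES if program <= s}

# A finite vocabulary: the only programs this layer can express.
vocabulary = [frozenset({"a"}), frozenset({"a", "b"}), frozenset({"a", "b", "c"})]

for p in vocabulary:
    print(sorted(p), "-> extension size:", len(extension(p)))
```

The program demanding the fewest facts holds in the most states; this extensional size is the "weakness" the paper uses throughout.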

Axioms and Universality:

  1. Existence and Non-Existence: There are things that exist (the environment) and things that do not.
  2. State Differentiation: The environment has states differentiated along one or more dimensions, potentially representing time, space, or other dimensions.

Complexity and Generalization:

  1. Policy Weakness: A policy's weakness is the cardinality of its extension, the set of everything the policy permits; weaker policies permit more (see the sketch after this list).
  2. Sample Efficiency: How reliably a policy chosen from a finite sample identifies the true task; the paper argues that weakness, not simplicity, is the relevant proxy.
  3. Subjectivity and Confounding: What counts as complex or simple depends on the abstraction layer and vocabulary in use.
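
A minimal sketch of the weakness-based selection rule, using invented toy policies: represent each policy extensionally and, among those consistent with the observed data, prefer the one with the largest extension.

```python
# Illustrative sketch: among candidate policies consistent with the same
# observed data, prefer the weaker one (larger extension). The policies
# and data here are invented toy objects.

# Each policy is represented extensionally: the input/output pairs it permits.
policy_a = {(0, 0), (1, 1)}                     # strong: permits little
policy_b = {(0, 0), (1, 1), (2, 0), (3, 1)}     # weak: permits more

observed = {(0, 0), (1, 1)}                     # finite sample of the task

def consistent(policy, data):
    """A policy is consistent if it permits everything observed."""
    return data <= policy

candidates = [p for p in (policy_a, policy_b) if consistent(p, observed)]
weakest = max(candidates, key=len)  # weakness = cardinality of extension
print("chosen policy:", sorted(weakest))
```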

Key Propositions and Evidence

Proposition of Subjectivity:

Bennett argues that in the absence of an abstraction layer, the complexity of all behaviors is equal. The implication is that without predefined constraints, simplicity does not correlate with sample efficiency. This conclusion is drawn from a formal derivation showing that, without abstraction, any behavior (or policy) can be described by a single declarative program, rendering complexity an irrelevant measure.
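
The subjectivity claim can be illustrated with a toy encoding exercise (invented here, not taken from the paper): the same behaviour has a one-token description in a vocabulary that happens to contain it as a primitive, and a four-token description in a vocabulary of singletons.

```python
# Toy illustration of subjectivity: description length depends entirely
# on the chosen vocabulary. The behaviour and both vocabularies are
# invented for this example.

behaviour = frozenset({1, 3, 5, 7})  # the set of inputs a policy accepts

# Vocabulary A happens to contain the behaviour as a single primitive.
vocab_a = {"odd_under_8": behaviour}

# Vocabulary B has only singletons, so the behaviour must be a union.
vocab_b = {f"just_{i}": frozenset({i}) for i in range(8)}

def describe(target, vocab):
    """Greedy cover of target by primitives (a crude proxy for shortest form)."""
    desc, covered = [], set()
    while covered != set(target):
        name, prim = max(vocab.items(),
                         key=lambda kv: len((kv[1] & target) - covered))
        if not (prim & target) - covered:
            raise ValueError("target not expressible in this vocabulary")
        desc.append(name)
        covered |= prim & target
    return desc

print(len(describe(behaviour, vocab_a)))  # 1 token
print(len(describe(behaviour, vocab_b)))  # 4 tokens
```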

Proposition of Confounding:

When finite vocabularies are assumed, policy weakness can confound simplicity and sample efficiency. Bennett illustrates that a vocabulary naturally enforces constraints that cause weaker policies to manifest as simpler forms. This inherent property of finite vocabularies creates an apparent correlation between simplicity and generalization efficiency, which is essentially an artifact of how abstraction layers are constructed.
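
This mechanism can be manufactured directly. In the contrived demonstration below (not the paper's construction), binary codewords are handed out weakest-constraint-first, so weaker constraints receive simpler forms by fiat and the simplicity-generalization correlation appears with no causal link between form and function.

```python
from itertools import product

# Contrived demo of the confound: a vocabulary that assigns shorter
# codewords to weaker constraints makes "simpler" and "weaker" correlate
# by construction. All objects here are invented.

# Constraints represented extensionally; weakness = extension size.
constraints = [frozenset(range(r)) for r in (2, 4, 8, 12)]

def codewords():
    """Binary codewords in order of increasing length: 0, 1, 00, 01, ..."""
    length = 1
    while True:
        yield from ("".join(bits) for bits in product("01", repeat=length))
        length += 1

# Hand codewords out weakest-constraint-first.
by_weakness = sorted(constraints, key=len, reverse=True)
codes = dict(zip(by_weakness, codewords()))

for c in by_weakness:
    print(f"weakness={len(c):2d}  form={codes[c]!r}")
```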

Goal-Direction and Further Implications:

The paper argues that abstraction layers are inherently goal-directed, optimizing for efficiency within finite spatial and temporal bounds. This implies that abstraction layers minimize vocabulary size while maximizing policy weakness to ensure adaptability and tractability in real-world scenarios. This optimization is reflected in both natural and artificial systems, where layers are designed (or evolved) to satisfy these constraints.

Empirical Findings

Experimental Results:

Previous work cited in the paper supplies the empirical basis for the theoretical assertions. Specifically, those studies demonstrated that choosing policies by weakness rather than simplicity yields significant improvements in generalization rate, ranging from 110% to 500%. These results underscore the practical implications of the theoretical framework, suggesting that policy weakness, rather than simplicity, is the better predictor of performance in generalization tasks.
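
The toy experiment below is written in the spirit of those cited studies; it is not the paper's benchmark, and it uses extension size as a crude stand-in for simplicity of form. Among hypotheses consistent with a small sample of a hidden task, the weakest recovers the whole task while the narrowest recovers only a fraction.

```python
import random

# Toy sketch in the spirit of the cited weakest-vs-shortest experiments;
# NOT the paper's benchmark. The task, the hypothesis pool, and the use
# of extension size as a proxy for "simplest form" are all invented.

random.seed(0)
target = {x for x in range(256) if x % 3 == 0}    # hidden task
train = set(random.sample(sorted(target), k=10))  # small observed sample

# Candidates: the true task plus narrow over-fits that also fit the sample.
hypotheses = [frozenset(target)]
for _ in range(200):
    extras = random.sample(sorted(target - train), k=10)
    hypotheses.append(frozenset(train | set(extras)))

consistent = [h for h in hypotheses if train <= h]
weakest = max(consistent, key=len)    # largest extension: the weakest policy
narrowest = min(consistent, key=len)  # stand-in for the "simplest" choice

def recall(h):
    """Fraction of the hidden task a hypothesis actually recovers."""
    return len(h & target) / len(target)

print(f"weakest   recall: {recall(weakest):.2f}")    # 1.00
print(f"narrowest recall: {recall(narrowest):.2f}")  # ~0.23
```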

Implications and Speculation

Practical Implications:

The findings have profound implications for artificial intelligence, particularly in the design of learning algorithms and models. By focusing on weak constraints rather than simple forms, AI systems can achieve better sample efficiency, enhancing their ability to generalize from limited data. This shift in focus may inform the development of more robust and adaptable AI systems, facilitating their application across a broader range of environments.

Theoretical Implications:

Theoretically, the paper challenges foundational assumptions about complexity in AI and cognitive science, suggesting a shift in how we understand the way systems generalize. It deconstructs the conflation of simplicity with efficiency, presenting a nuanced view that considers the interplay between abstraction, vocabulary, and policy weakness.

Future Directions:

Future research could explore the dynamic evolution of abstraction layers in more complex and heterogeneous environments. Investigating how different types of abstraction layers influence learning and inference in multi-agent systems or in environments with non-stationary dynamics could yield further insights into the nature of complexity, simplicity, and generalization.

Conclusion

Michael Timothy Bennett's "Is Complexity an Illusion?" provides a rigorous, theoretically sound framework that questions traditional views on the relationship between simplicity and generalization. The paper's propositions and empirical findings advocate for a focus on policy weakness within finite vocabularies as a more effective proxy for sample efficiency. This work not only redefines theoretical perspectives but also offers practical guidance for improving AI systems, underscoring the importance of abstraction in shaping our understanding of complexity.
