A Circuit Complexity Formulation of Algorithmic Information Theory (2306.14087v1)

Published 25 Jun 2023 in cs.LG and cs.CC

Abstract: Inspired by Solomonoff's theory of inductive inference, we propose a prior based on circuit complexity. There are several advantages to this approach. First, it relies on a complexity measure that does not depend on the choice of UTM. There is one universal definition for Boolean circuits involving a universal operation such as nand, with simple conversions to alternative definitions such as and, or, and not. Second, there is no analogue of the halting problem. The output value of a circuit can be calculated recursively by computer in time proportional to the number of gates, while a short program may run for a very long time. Our prior assumes that a Boolean function, or equivalently, a Boolean string of fixed length, is generated by some Bayesian mixture of circuits. This model is appropriate for learning Boolean functions from partial information, a problem often encountered within machine learning as "binary classification." We argue that an inductive bias towards simple explanations as measured by circuit complexity is appropriate for this problem.
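The abstract's two key mechanisms can be illustrated concretely: a circuit built from a universal gate (nand) is evaluated in time proportional to its gate count, and circuits can be weighted by a simplicity prior. The sketch below is an illustrative assumption, not the paper's formal construction; the circuit encoding and the 2^(-size) weighting are choices made here for demonstration.

```python
def nand(a, b):
    """The universal NAND operation on bits."""
    return 1 - (a & b)

def eval_circuit(gates, inputs):
    """Evaluate a NAND circuit in one pass, O(number of gates).

    gates: list of (i, j) wire-index pairs; wire k < len(inputs) is the
    k-th input, and wire len(inputs) + g carries the output of gate g.
    Returns the value on the final wire.
    """
    wires = list(inputs)
    for i, j in gates:
        wires.append(nand(wires[i], wires[j]))
    return wires[-1]

def prior_weight(gates):
    """Toy complexity prior: mass proportional to 2^(-gate count).

    This specific weighting is an assumption for illustration, standing
    in for a Bayesian mixture that favors smaller circuits.
    """
    return 2.0 ** -len(gates)

# Example: XOR(a, b) built from four NAND gates.
xor_gates = [(0, 1), (0, 2), (1, 2), (3, 4)]
truth_table = [eval_circuit(xor_gates, [a, b])
               for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
assert truth_table == [0, 1, 1, 0]
```

Note that evaluation always terminates after exactly one step per gate, which is the contrast the abstract draws with short programs that may run for a very long time.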

References (11)
  1. R. J. Solomonoff, "A formal theory of inductive inference. Part I", Information and Control 7(1), 1964, pp. 1–22. DOI: 10.1016/S0019-9958(64)90223-2
  2. Leonard Pitt and Leslie G. Valiant, "Computational Limitations on Learning from Examples", Journal of the ACM 35(4), 1988, pp. 965–984. DOI: 10.1145/48014.63140
  3. Radford Neal, "Bayesian Learning via Stochastic Dynamics", in Advances in Neural Information Processing Systems 5, Morgan Kaufmann, 1992. URL: https://proceedings.neurips.cc/paper_files/paper/1992/file/f29c21d4897f78948b91f03172341b7b-Paper.pdf
  4. Nir Friedman and Daphne Koller, "Being Bayesian about Network Structure", in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI '00), Morgan Kaufmann, 2000, pp. 201–210.
  5. Eric Allender, "When Worlds Collide: Derandomization, Lower Bounds, and Kolmogorov Complexity", in FST TCS 2001: Foundations of Software Technology and Theoretical Computer Science, Springer, 2001, pp. 1–15.
  6. Jürgen Schmidhuber, "The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions", in Computational Learning Theory, Springer, 2002, pp. 216–228.
  7. David Heckerman, Christopher Meek and Gregory Cooper, "A Bayesian Approach to Causal Discovery", in Computation, Causation, and Discovery, 2006, pp. 1–28. DOI: 10.1007/3-540-33486-6_1
  8. Ming Li and Paul Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, 2008.
  9. Samuel Rathmanner and Marcus Hutter, "A Philosophical Treatise of Universal Induction", Entropy 13(6), 2011, pp. 1076–1136. DOI: 10.3390/e13061076
  10. John von Neumann, "The general and logical theory of automata", in Systems Research for Behavioral Science, Routledge, 2017, pp. 97–107.
  11. Eric Allender, "Vaughan Jones, Kolmogorov Complexity, and the New Complexity Landscape around Circuit Minimization", New Zealand Journal of Mathematics 52, 2021, pp. 585–604. DOI: 10.53733/148
Citations (2)
