
Limits of Computation

Updated 9 December 2025
  • Limits of computation are the rigorous constraints derived from physical, mathematical, and architectural frameworks that define what computers can achieve.
  • Key frameworks such as quantum speed limits, thermodynamic bounds, and algorithmic complexity illustrate how resources like energy and memory limit computation.
  • This topic offers practical insights into the impact of these boundaries on optimization, simulation precision, and the structural design of both traditional and neural computing systems.

Limits of computation are demarcated by rigorous physical, mathematical, and architectural constraints on what can be accomplished by physical and abstract computing systems. These boundaries arise from the interplay between resource quantification, formal theories of algorithmic and dynamical complexity, and intrinsic features of logical and physical models. The domain encompasses quantum speed limits, energy and space-bounded theorems, computational barriers in models such as generalized probabilistic theories, non-locality versus computability, the spectrum between limit computability and undecidability, algorithmic hardness in optimization, and architectural barriers in neural systems (“split-brain syndrome”). The following sections elucidate the principal frameworks, highlight major results, and contextualize the landscape of known boundaries.

1. Physical Speed and Resource Constraints

The ultimate rate of computation and transformation in physical systems is determined by the structure of underlying symmetry groups, available resources (energy, control amplitude, bandwidth), and geometric constraints. In quantum systems, the evolution of a state is parameterized by curves in the Lie group $SU(N)$, with dynamics governed by the Hamiltonian $H(t)$ and constraints captured by positive-homogeneous resource functionals $F$ on the associated Lie algebra $\mathfrak{su}(N)$ (Russell et al., 2016).

A resource constraint $F(-iH(t)) = \kappa$ at all times yields a right-invariant action $S[U(\cdot)] = \int_0^T F(-iH(t))\,dt = \kappa T$, so time minimization becomes a geodesic problem in Finsler geometry. When $F$ is bi-invariant, the time-optimal paths are exponentials $U(t) = \exp(tX)$, and the minimum time to implement a gate $U_f$ is given by

$$T_* = \frac{F(\log U_f)}{\kappa}.$$

Special cases recover known quantum speed limits:

  • Margolus–Levitin bound: $T_* = \frac{\pi\hbar}{2(\bar E - E_0)}$
  • Mandelstam–Tamm inequality: $T_* = \frac{\pi\hbar}{2\Delta E}$

The geometric framework generalizes to arbitrary controlled dynamical systems, providing a physical basis for time complexity: every speed-limiting resource induces a Finsler geometry on the space of computational configurations, and optimal time evolution corresponds to geodesics under these constraints. Time-dependent controls do not beat constant controls in the bi-invariant case, underscoring the robustness of these limits (Russell et al., 2016).
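
As a concrete illustration of the bi-invariant case, the sketch below evaluates $T_* = F(\log U_f)/\kappa$ numerically. The choice of $F$ as the spectral norm of the generator and of a Pauli-X target gate are illustrative assumptions, not prescriptions from the cited framework.

```python
import numpy as np
from scipy.linalg import logm

def minimal_gate_time(U_f, kappa, resource=lambda X: np.linalg.norm(X, 2)):
    """Minimum time T_* = F(log U_f) / kappa for a bi-invariant resource functional F.

    U_f      : target unitary (square complex array)
    kappa    : resource budget, F(-iH(t)) = kappa at all times
    resource : positive-homogeneous functional on the Lie algebra
               (spectral norm chosen here purely for illustration)
    """
    X = logm(U_f)              # principal matrix logarithm of the target gate
    return resource(X) / kappa

# Example: Pauli-X gate under a unit resource budget.
U_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
print(minimal_gate_time(U_x, kappa=1.0))   # ~ pi, since log(U_x) has spectral norm pi
```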

2. Thermodynamic, Energy, and Spatial Limits

Fundamental constraints on energy, power, spatial density, and manufacturing underpin the computational ceiling of real devices (Markov, 2014; Earley, 2020). Essential principles include:

  • Landauer’s Principle: Each irreversible bit operation dissipates energy $\Delta E \ge k_B T \ln 2$; modern devices operate $10^4$–$10^5$ times above this limit.
  • Margolus–Levitin Bound: Maximum operations per second scale as $2E/\pi\hbar$ for energy $E$; real systems fall orders of magnitude short (a numerical sketch of both limits follows this list).
  • Bekenstein Bound & Bremermann’s Limit: Govern the maximal entropy and communication rate per mass.
  • Power Density Wall: Thermal management constrains active regions (“dark silicon”); only fractions of chips operate at peak throughput.
  • Abbe Diffraction & Atomic Scale: Manufacturing resolution is limited to $\sim 0.2\,\mathrm{nm}$; actual transistor densities remain orders of magnitude below physical limits.
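
To make the first two bounds concrete, the following sketch evaluates the Landauer energy at room temperature and the Margolus–Levitin rate for a system with mean energy $E = 1\,\mathrm{J}$; the temperature and energy values are arbitrary illustrative choices.

```python
import math
import scipy.constants as const

# Landauer limit: minimum energy dissipated per irreversible bit operation at temperature T.
T = 300.0                                    # kelvin (room temperature)
E_bit = const.k * T * math.log(2)            # ~ 2.9e-21 J

# Margolus-Levitin limit: maximum operations per second for a system of mean energy E.
E = 1.0                                      # joules (illustrative)
rate = 2 * E / (math.pi * const.hbar)        # ~ 6e33 ops/s

print(f"Landauer energy per bit at 300 K:  {E_bit:.2e} J")
print(f"Margolus-Levitin rate at E = 1 J:  {rate:.2e} ops/s")
```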

Advanced bounds merge quantum, thermodynamic, and geometric constraints. For a region with volume $V$ and surface area $A$, the maximal sustained computational rate scales as $R_{\max} \lesssim \text{const}\cdot\sqrt{AV}$ in reversible implementations; irreversible schemes are surface-area limited, $R \propto A$ (Earley, 2020). Relativistic effects may introduce modified scalings in astrophysical cases, e.g. $\sqrt{AR}$ for large radii.

Attaining these bounds requires homogeneity in architecture and temperature; inhomogeneities inject volumetric entropy production, reverting the scaling to the inferior $R \propto A$ law.
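
A toy comparison of the two regimes for a spherical region of radius $r$ (so $A \propto r^2$ and $V \propto r^3$), with all prefactors set to 1, purely to show how the reversible $\sqrt{AV}$ scaling pulls ahead of the surface-limited law as the region grows; the numbers carry no physical meaning.

```python
import numpy as np

radii = np.logspace(0, 6, 7)          # region radius r, arbitrary units
A = 4 * np.pi * radii**2              # surface area ~ r^2
V = (4 / 3) * np.pi * radii**3        # volume ~ r^3

R_irrev = A                           # irreversible: rate limited by surface area
R_rev = np.sqrt(A * V)                # reversible: rate ~ sqrt(A V) ~ r^(5/2)

for r, ratio in zip(radii, R_rev / R_irrev):
    print(f"r = {r:>9.0f}   reversible/irreversible advantage ~ {ratio:8.1f}x")
```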

3. Computational Complexity in Abstract and Physical Theories

Abstract computational power is bounded by properties of the underlying model. In generalized probabilistic theories (GPTs), efficient computation is constrained by physically motivated assumptions (Lee et al., 2014):

  • Tomographic Locality: Ensures outcomes on composite systems are characterizable by local measurements.
  • Causality: Outcomes at earlier stages do not depend on future measurement choices.

Under tomographic locality alone, the class of problems efficiently solvable in GPTs (BGP) embeds in AWPP. When extended with post-selection, the computational power equals PP. Relative to some classical oracle $A$, NP-complete problems remain outside BGP ($\mathrm{NP}^A \not\subset \mathrm{BGP}^A$) (Lee et al., 2014). Quantum theory remains maximal among reasonable operational theories; further extensions cannot collapse these boundaries.

4. Computability, Non-locality, and Undecidable Functions

Computability imposes a sharp limit on the realization of non-local correlations. If a function $f:\{0,1\}^n\times\{0,1\}^m\to\{0,1\}$ is uncomputable, no no-signalling statistical box can produce $a\oplus b = f(x,y)$ with probability $>1/2$ for all $(x,y)$ (Islam et al., 2012). Probabilistic computation of the halting problem, or of general undecidable predicates, is thereby forbidden even though such correlations would be consistent with the no-signalling condition alone.

Quantum theory already limits non-locality (e.g., Tsirelson’s bound) below the no-signalling polytope. Computability theory thus introduces a further orthogonal barrier: only those correlations permitting classical (or quantum Turing machine) simulation are physically admissible (Islam et al., 2012).
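
For contrast with the uncomputable case, a minimal sketch of the computable one: the Popescu–Rohrlich box satisfies $a \oplus b = x \wedge y$ (a computable $f$) with certainty while keeping uniform marginals. The code samples the box's joint statistics directly, which is only meant to make the winning condition concrete, not to suggest a local realization.

```python
import random

def pr_box(x: int, y: int):
    """Sample outputs (a, b) of a Popescu-Rohrlich box on inputs x, y in {0, 1}.

    Each marginal is a uniform bit (no-signalling), yet a XOR b = x AND y always
    holds. This samples the joint distribution; it is not a local realization.
    """
    a = random.randint(0, 1)
    b = a ^ (x & y)
    return a, b

# Check the condition a XOR b = f(x, y) for the computable f(x, y) = x AND y.
trials = wins = 0
for _ in range(1000):
    for x in (0, 1):
        for y in (0, 1):
            a, b = pr_box(x, y)
            wins += (a ^ b) == (x & y)
            trials += 1
print(wins, "/", trials)   # 4000 / 4000: success probability 1 for this computable f
```

For an uncomputable $f$, the result above states that no such sampler, and indeed no no-signalling box at all, can exceed success probability $1/2$ on every input pair.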

5. Limit Computability, Arithmetical Hierarchy, and Global Optimization

Limit computability (Shoenfield’s limit lemma, $\Delta_2$ sets) captures functions for which outputs can be approximated by a convergent recursive process. A function $F : \mathbb{N}^{\mathbb{N}} \to \mathbb{N}^{\mathbb{N}}$ is limit computable iff $F = \lim \circ G$ with $G$ computable, or equivalently $F = H \circ J$ with $H$ computable and $J$ the Turing jump (Brattka, 2018). The framework exposes a hierarchy:

  • Computable $\subset$ limit computable $\subset$ $0'$-computable.

On computable metric spaces, $f$ is $0'$-computable iff it is limit computable, continuous, and its modulus of continuity is $0'$-computable. For instance, Lipschitz continuous limit computable functions are automatically $0'$-computable; 1-generic points are canonical loci where limit computability aligns with computability relative to the halting problem (Brattka, 2018).
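
A standard illustration of the limit lemma is the stage-wise approximation of the halting set: the guess $g(e, t)$ is computable in both arguments and converges as $t \to \infty$ to the uncomputable limit value. The sketch below uses a toy "halts within $t$ steps" predicate in place of a real machine simulator, so it only exhibits the shape of the construction.

```python
def halts_within(e: int, t: int) -> bool:
    """Toy stand-in for 'program e halts on empty input within t steps'.

    A real construction would step a universal machine; any computable predicate
    that is monotone in t and eventually stabilizes illustrates the same point.
    """
    return t >= 2 ** (e % 7)          # illustrative placeholder, not a real simulator

def guess(e: int, t: int) -> int:
    """Stage-t computable approximation g(e, t) to the halting value for program e."""
    return 1 if halts_within(e, t) else 0

# For each e, guess(e, t) changes value at most once and converges as t grows;
# the pointwise limit lim_t g(e, t) is the (in general uncomputable) halting value,
# exactly the Delta_2 situation described by Shoenfield's limit lemma.
for e in range(5):
    print(e, [guess(e, t) for t in (1, 10, 100, 1000)])
```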

Global optimization of continuous functions on compact sets, by contrast, is not limit computable: checking whether a candidate is a global minimizer involves universal quantification over a continuum and is $\Pi_2$-hard. Only local optimization is $\Delta_2$-computable. However, partial relaxations (e.g., known basin-of-attraction size) reintroduce computability for constrained cases (Lakshmanan, 2019).

6. Memory-Bounded Computation in Dynamical Systems

Physical systems with bounded memory $M$ are restricted to computations feasible in $\mathsf{SPACE}(M^{O(1)})$; the Space-Bounded Church–Turing Thesis (SBCT) formalizes this claim (Braverman et al., 2019). For a noisy dynamical system, memory is quantified by the mutual information $I(X_t; X_{t+1})$, and computational simulation of the invariant measure to $2^{-n}$ precision is possible in $\mathsf{SPACE}((M+\log n)^{O(1)})$.

Noise enforces a finite number of distinguishable states, capping computational hardness even if time is unbounded. Embedding the space hierarchy theorem shows these bounds are tight: no closed finite-memory system can realize computations beyond its space class, and the steady-state is always computable for nonzero noise.
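
A rough numerical sketch of the memory measure $I(X_t; X_{t+1})$ for a noisy one-dimensional map (a logistic map with additive Gaussian noise, an arbitrary example system), estimated with a simple histogram plug-in estimator. Both the system and the estimator are illustrative assumptions; the qualitative point is that stronger noise reduces the number of distinguishable states and should drive the estimated mutual information, and hence the effective memory, down.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in estimate of I(X; Y) in bits from paired samples, via a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
for sigma in (0.01, 0.05, 0.2):                 # additive noise level
    x = np.empty(200_000)
    x[0] = 0.4
    for t in range(len(x) - 1):                 # noisy logistic map, clipped to [0, 1]
        x[t + 1] = np.clip(3.9 * x[t] * (1 - x[t]) + rng.normal(0, sigma), 0, 1)
    print(f"sigma = {sigma}:  I(X_t; X_t+1) ~ {mutual_information(x[:-1], x[1:]):.2f} bits")
```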

7. Architectural Barriers in Neural and Deep Learning Systems

LLMs expose a structural gap between comprehension and execution: instruction knowledge (“how to do”) is dissociated from computational competence (“actually doing”). Controlled experiments reveal that LLMs achieve near-perfect correctness at step-wise decomposition, yet aggregation of those steps into end-to-end execution (e.g. multi-digit arithmetic) fails catastrophically at scale, despite increases in model size or training data (Zhang, 14 Jul 2025). Embedding analyses show separate geometric clusters for instruction and execution, with no mechanistic binding between these pathways (“split-brain syndrome”).

Feed-forward networks (FFNs) using ReLU activations implement only piecewise linear functions. By Theorem A.2 in (Zhang, 14 Jul 2025), exact symbolic operations (e.g., multiplication) over unbounded domains cannot be realized in this regime. LLMs thus serve as powerful pattern interpreters but not true algorithmic engines; the classical computability hierarchy places them below general symbolic reasoners.
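
The piecewise-linearity argument can be checked numerically: along any ray, a ReLU network's output eventually grows at most linearly in the scale factor, while the product $x \cdot y$ grows quadratically, so no such network can agree with exact multiplication on an unbounded domain. The sketch below uses a small randomly initialized ReLU MLP (width, depth, and weights are arbitrary choices) solely to exhibit the growth-rate mismatch.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 64)), rng.normal(size=64)    # random 2-64-1 ReLU network
W2, b2 = rng.normal(size=(64, 1)), rng.normal(size=1)

def relu_mlp(v):
    """One hidden ReLU layer; any such network is piecewise linear in its input."""
    return (np.maximum(v @ W1 + b1, 0.0) @ W2 + b2).item()

direction = np.array([1.0, 1.0])                          # scan along the ray (s, s)
for s in (1, 10, 100, 1000):
    net_out = relu_mlp(s * direction)
    product = (s * direction[0]) * (s * direction[1])      # target x*y grows like s^2
    print(f"s = {s:>5}   network ~ {net_out:14.1f}   x*y = {product:14.1f}")
# The network output grows at most linearly in s, so it cannot track the quadratic
# target for all inputs: exact multiplication is out of reach on unbounded domains.
```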

Hybrid methods (neuro-symbolic architectures, explicit variable binding, metacognitive control) are suggested as future directions to overcome these barriers, but structural constraints preclude pattern completion models from attaining general compositional abilities or unbounded symbolic reasoning.


This treatment synthesizes the boundaries imposed by physical laws, mathematical hierarchy, and architectural structure on computation. Each principal regime is characterized by rigorous theorems, explicit scaling laws, and constructive or impossibility proofs across quantum, classical, abstract, and neural contexts. The landscape of computation limits remains defined by tight interaction between resource geometry, logical complexity, and model-specific constraints.
