Efficient Turing-Computable Functions

Updated 26 October 2025
  • Efficient Turing-computable functions are defined as functions computed in polynomial time with fixed input and output precisions, ensuring uniform error bounds.
  • They induce bounded-fan-in, polynomial-size Boolean circuits that provide a compositionally sparse representation of the computation.
  • This sparse architecture enables deep neural networks to emulate the circuits efficiently, mitigating the curse of dimensionality in high-dimensional approximation and optimization.

Efficient Turing-computable functions are those for which a Turing machine can, at fixed input and output precision, compute output approximations to the desired accuracy in time that is polynomial in the bit-depths of the input and output. Recent research reveals that this computational efficiency imposes unexpectedly strong structural constraints: at any finite discretization, these functions admit “compositionally sparse” representations—specifically, bounded-fan-in, polynomial-size Boolean circuits—which can be emulated efficiently by deep neural networks. This establishes a rigorous correspondence between computational complexity and compositional sparsity, providing a theoretical foundation for the observed efficiency of deep architectures in high-dimensional function approximation and optimization.

1. Formalization of Efficient Turing Computability at Fixed Precision

Let $f : [0,1]^d \to \mathbb{R}^m$ be a function. The notion of “efficient Turing computability at fixed precision” is defined as follows: there exists a Turing machine $M$ such that, for each input quantized to $n$ bits per dimension, and for a target output precision $m_{\mathrm{out}}$, $M$ computes an output approximation $\hat f$ such that

$$\|\hat f(x) - f(x)\|_\infty < 2^{-m_{\mathrm{out}}}$$

for all $x$, and the running time is $\mathrm{poly}(n + m_{\mathrm{out}})$. Formally, with quantization $Q_n$ for inputs and $Q^{\mathrm{out}}_{m_{\mathrm{out}}}$ for outputs, define the finite map

$$F_{n,m_{\mathrm{out}}} = Q^{\mathrm{out}}_{m_{\mathrm{out}}} \circ f \circ Q_n^{-1} : \{0,1\}^{nd} \to \{0,1\}^{m_{\mathrm{out}} m}$$

and require $F_{n,m_{\mathrm{out}}}$ to be computable in time polynomial in $n + m_{\mathrm{out}}$ (Poggio, 13 Oct 2025).
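
To make the definition concrete, the following Python sketch builds the finite map $F_{n,m_{\mathrm{out}}}$ for a toy target function. It is a minimal illustration, not a construction from the paper: the uniform-grid quantizers and the names `quantize_input`, `dequantize_input`, `quantize_output`, and `F_discrete` are assumptions made here, and the map returns rounded real values rather than their bit encodings for readability.

```python
import math

def quantize_input(x, n):
    """Q_n: map each coordinate of x in [0,1] to an n-bit integer on a uniform grid."""
    return [min(int(xi * (2 ** n)), 2 ** n - 1) for xi in x]

def dequantize_input(q, n):
    """Q_n^{-1}: pick a representative point of each quantization cell."""
    return [(qi + 0.5) / (2 ** n) for qi in q]

def quantize_output(y, m_out):
    """Q^out_{m_out}: round each output coordinate to m_out fractional bits."""
    return [round(yi * (2 ** m_out)) / (2 ** m_out) for yi in y]

def F_discrete(f, x, n, m_out):
    """F_{n,m_out} = Q^out_{m_out} ∘ f ∘ Q_n^{-1}, evaluated on the cell containing x."""
    x_rep = dequantize_input(quantize_input(x, n), n)
    return quantize_output(f(x_rep), m_out)

# Toy example: f(x) = (sin(x1 + x2),), with n = 8 input bits and m_out = 10 output bits.
f = lambda x: [math.sin(x[0] + x[1])]
print(F_discrete(f, [0.3, 0.7], n=8, m_out=10))
```

Efficiency in the sense above asks that such a map be computable by a Turing machine in time $\mathrm{poly}(n + m_{\mathrm{out}})$, uniformly over inputs.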

2. Circuit Representations: Bounded-Fan-In and Compositional Sparsity

By classical complexity theory, any Turing machine running in polynomial time on inputs of $n + m_{\mathrm{out}}$ bits can be emulated by a Boolean circuit of size and depth polynomial in $n + m_{\mathrm{out}}$, with bounded fan-in (typically 2 or 3) per gate. This circuit expresses the computation at the chosen discretization as a directed acyclic graph (DAG) in which each node computes a simple Boolean function of its small set of parents. Thus, efficient Turing computability at finite precision automatically induces a compositional architecture: the circuit factors the overall computation into $s = \mathrm{poly}(n + m_{\mathrm{out}})$ local modules of bounded arity $k \leq 3$, with depth $L = \mathrm{poly}(n + m_{\mathrm{out}})$.

The concept of “compositional sparsity” (Editor's term), as precisely formalized in (Poggio, 13 Oct 2025), refers to this bounded-fan-in, polynomial-size DAG structure.
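
A minimal sketch of the kind of object this describes: a bounded-fan-in Boolean circuit stored as a DAG of gates and evaluated in topological order. The `Gate` representation and the {AND, OR, NOT} gate set are illustrative choices made here; the paper only asserts the existence of such circuits via the standard polynomial-time Turing machine to circuit translation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Gate:
    op: str                 # "INPUT", "NOT", "AND", or "OR"
    args: Tuple[int, ...]   # indices of parent wires (fan-in at most 2)

def eval_circuit(gates: List[Gate], inputs: List[bool]) -> List[bool]:
    """Evaluate a bounded-fan-in circuit whose gates are listed in topological order."""
    wires: List[bool] = []
    for g in gates:
        if g.op == "INPUT":
            wires.append(inputs[g.args[0]])
        elif g.op == "NOT":
            wires.append(not wires[g.args[0]])
        elif g.op == "AND":
            wires.append(wires[g.args[0]] and wires[g.args[1]])
        elif g.op == "OR":
            wires.append(wires[g.args[0]] or wires[g.args[1]])
    return wires

# Tiny example: XOR(x0, x1) as a DAG of fan-in-2 gates.
xor_circuit = [
    Gate("INPUT", (0,)), Gate("INPUT", (1,)),   # wires 0, 1
    Gate("NOT", (0,)), Gate("NOT", (1,)),       # wires 2, 3
    Gate("AND", (0, 3)), Gate("AND", (1, 2)),   # wires 4, 5
    Gate("OR", (4, 5)),                         # wire 6 = XOR(x0, x1)
]
print(eval_circuit(xor_circuit, [True, False])[-1])  # True
```

Each wire depends on at most two parents, so the circuit is exactly the bounded-fan-in, polynomial-size DAG that “compositional sparsity” refers to, with size measured in gates and depth in the longest input-to-output path.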

3. Neural Emulation: Efficient Deep Approximants

Replacing each Boolean gate in the circuit with a small neural “emulator” (i.e., a constant-size subnetwork implementing its Boolean function via standard activations such as ReLU or sigmoid), one constructs a deep neural network $\Phi_{n,m_{\mathrm{out}}}$ reflecting the circuit's connectivity. The overall network thus has size and depth polynomial in $n + m_{\mathrm{out}}$ and, when presented with $n$-bit inputs, outputs an approximation satisfying

$$\|\Phi_{n,m_{\mathrm{out}}}(Q_n(x)) - f(x)\|_\infty \leq 2^{-m_{\mathrm{out}}}$$

for all $x$ (Poggio, 13 Oct 2025). The construction leverages quantitative results on neural Boolean function emulation (e.g., [Yarotsky; not in citation set]), establishing that the representational and approximation overhead per gate is bounded.
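
As an illustration of the per-gate emulation step, the sketch below shows one standard way (assumed here, not taken from the paper) to realize fan-in-2 Boolean gates exactly on $\{0,1\}$-valued inputs using single ReLU units; wiring such units according to the circuit DAG reproduces the circuit's output with only constant overhead per gate.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Exact ReLU emulators for Boolean gates on inputs in {0, 1}.
def AND(x, y):
    return relu(x + y - 1.0)          # equals 1 iff both inputs are 1

def OR(x, y):
    return 1.0 - relu(1.0 - x - y)    # equals 0 iff both inputs are 0

def NOT(x):
    return 1.0 - x                    # affine; no activation needed

# Wiring the emulators along a DAG, e.g. XOR as in the circuit sketch above:
def XOR(x, y):
    return OR(AND(x, NOT(y)), AND(NOT(x), y))

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), "->", int(XOR(a, b)))
```

Because each gate needs only a constant number of units and connections, the emulating network inherits the circuit's polynomial size and depth.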

In summary:

  • For any efficiently Turing-computable $f$, there exists, for all $n, m_{\mathrm{out}}$, a deep network of size and depth polynomial in the total bit-budget achieving uniform accuracy $2^{-m_{\mathrm{out}}}$.

4. Precision–Complexity Scaling and Universality

The translation from Turing machine to circuit, then to deep network, preserves efficient scaling with respect to precision. Specifically:

  • For target accuracy $\varepsilon = 2^{-m_{\mathrm{out}}}$ and an $n$-bit input discretization, the network size and depth are $\mathrm{poly}(n + \log_2(1/\varepsilon))$, as the short calculation below illustrates.
  • This matches standard information-theoretic lower bounds for efficient (i.e., “tractable”) high-dimensional function approximation.
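
A back-of-the-envelope illustration of this bit-budget scaling (the cubic polynomial below is a placeholder assumption; the paper only guarantees some polynomial in $n + m_{\mathrm{out}}$):

```python
import math

def required_output_bits(eps: float) -> int:
    """Smallest m_out with 2^(-m_out) <= eps."""
    return math.ceil(math.log2(1.0 / eps))

def network_size_bound(n_bits: int, eps: float, degree: int = 3) -> int:
    """Illustrative poly(n + log2(1/eps)) bound; the exponent 3 is a placeholder."""
    budget = n_bits + required_output_bits(eps)
    return budget ** degree

for eps in (1e-2, 1e-4, 1e-8):
    print(f"eps={eps:g}: m_out={required_output_bits(eps)}, "
          f"size/depth bound ~ {network_size_bound(16, eps):,}")
```

Halving the target error $\varepsilon$ adds one output bit, so the bound grows only polynomially in $\log_2(1/\varepsilon)$ rather than exponentially.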

Table 1: Summary of Efficiency Correspondence

| Model | Precision Parameters | Representation Size and Depth | Efficiency Guarantee |
| --- | --- | --- | --- |
| Turing Machine | $n$, $m_{\mathrm{out}}$ | - | Time $\mathrm{poly}(n + m_{\mathrm{out}})$ |
| Boolean Circuit | $n$, $m_{\mathrm{out}}$ | $\mathrm{poly}(n + m_{\mathrm{out}})$ gates, fan-in $k \leq 3$ | Depth $\mathrm{poly}(n + m_{\mathrm{out}})$ |
| Neural Network | $n$, $m_{\mathrm{out}}$ | $\mathrm{poly}(n + m_{\mathrm{out}})$ nodes/layers | Uniform error $2^{-m_{\mathrm{out}}}$ |

5. Compositional Approximation Rates and the Curse of Dimensionality

Classical results on compositional function approximation (see [Mhaskar, Poggio] as cited in (Poggio, 13 Oct 2025)) indicate that if a function factors as a composition of local functions, each acting on only a few variables, the number of basis functions (such as neural units or circuit gates) required to reach error $\varepsilon$ scales as $N = O(s \cdot \varepsilon^{-k/r})$, where $s$ is the number of active constituents, $k$ the local arity, and $r$ a smoothness parameter.

The reduction of every efficiently Turing-computable function (at a given discretization) to a compositionally sparse DAG with small $k$ ensures that, even for high-dimensional inputs, the cost in parameters and computational steps is governed by the complexity of the computation rather than by the ambient dimension $d$.
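
To make the comparison concrete, the following sketch contrasts the compositional rate $s \cdot \varepsilon^{-k/r}$ with the generic $d$-dimensional rate $\varepsilon^{-d/r}$; the specific values of $s$, $k$, $r$, and $d$ are illustrative choices, not figures from the paper.

```python
# Unit counts needed for accuracy eps: compositional vs. generic dense approximation.
eps = 1e-2     # target accuracy
r = 2          # smoothness parameter
k = 2          # local arity of each module (bounded fan-in)
s = 1_000      # number of modules in the compositional (DAG) representation
d = 100        # ambient input dimension

n_compositional = s * eps ** (-k / r)   # O(s * eps^(-k/r))
n_dense = eps ** (-d / r)               # O(eps^(-d/r)): the curse of dimensionality

print(f"compositional: ~{n_compositional:.3g} units")   # ~1e+05
print(f"dense:         ~{n_dense:.3g} units")           # ~1e+100
```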

A further consequence of compositional sparsity concerns optimization. Hierarchical search over the space of sparse, local modules (i.e., over a family indexed by the structure of the DAG) is far more tractable than naive global optimization over all possible functions of $d$ variables. The total cost of hierarchical search is polynomial in the number of modules (Theorem 6, (Poggio, 13 Oct 2025)), as opposed to exponential in $d$.

Thus, efficient Turing computability yields efficient networks not only for approximation but also for search and optimization, as the solution space is naturally partitioned into locally independent (sparse) subproblems.
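
A rough counting argument makes the gap in search cost visible. Under the simplifying assumption (made here for illustration, not the argument of Theorem 6) that each module is chosen independently from the full family of $k$-ary Boolean functions, hierarchical search is additive over modules, whereas unstructured search ranges over all Boolean functions of the full input:

```python
def hierarchical_search_cost(s: int, k: int) -> int:
    """Candidates examined when each of s modules is picked from the
    2^(2^k) Boolean functions of k inputs (cost is additive over modules)."""
    return s * 2 ** (2 ** k)

def log2_global_search_cost(total_input_bits: int) -> float:
    """log2 of the number of all Boolean functions on total_input_bits inputs,
    i.e. log2(2^(2^N)) = 2^N."""
    return 2.0 ** total_input_bits

s, k = 1_000, 2     # illustrative module count and arity
N = 64              # illustrative total number of input bits (roughly n * d)

print("hierarchical:", hierarchical_search_cost(s, k), "candidates")   # 16000
print("global: about 2^(2^%d) functions, log2 ~ %.3g" % (N, log2_global_search_cost(N)))
```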

6. Broader Implications and Theoretical Significance

The correspondence between computational efficiency and compositional sparsity establishes a foundational link between theoretical computer science, circuit complexity, and deep learning. It provides a formal explanation for the empirical observation that deep, modular architectures are especially well suited to high-precision, high-dimensional function approximation tasks whenever such tasks have an underlying efficient algorithmic structure.

This line of reasoning supplies a rigorous bridge from the theory of efficient (i.e., polynomial-time) Turing computation to the practical construction of deep neural networks with guaranteed approximation and search complexity bounds. It complements existing expressivity results for deep networks by showing that efficiency constraints inevitably lead to, and are fully captured by, a compositional architecture (Poggio, 13 Oct 2025).

In summary, efficient Turing-computable functions admit explicit, efficiently sized, and compositionally structured neural network approximants at any fixed input/output precision, revealing a near-identity between computational efficiency and compositional circuit/network sparsity.

References (1)