Efficient Turing-Computable Functions
- Efficient Turing-computable functions are defined as functions computed in polynomial time with fixed input and output precisions, ensuring uniform error bounds.
- They induce bounded-fan-in, polynomial-size Boolean circuits that provide a compositionally sparse representation of the computation.
- This sparse architecture enables deep neural networks to emulate the circuits efficiently, mitigating the curse of dimensionality in high-dimensional approximation and optimization.
Efficient Turing-computable functions are those for which a Turing machine can, at fixed input and output precision, compute output approximations to the desired accuracy in time that is polynomial in the bit-depths of the input and output. Recent research reveals that this computational efficiency imposes unexpectedly strong structural constraints: at any finite discretization, these functions admit “compositionally sparse” representations—specifically, bounded-fan-in, polynomial-size Boolean circuits—which can be emulated efficiently by deep neural networks. This establishes a rigorous correspondence between computational complexity and compositional sparsity, providing a theoretical foundation for the observed efficiency of deep architectures in high-dimensional function approximation and optimization.
1. Formalization of Efficient Turing Computability at Fixed Precision
Let $f:[0,1]^d \to \mathbb{R}$ be a function. The notion of “efficient Turing computability at fixed precision” is defined as follows: there exists a Turing machine $M$ such that, for each input $x$ quantized to $n$ bits per dimension, and for a target output precision of $m$ bits, $M$ computes an output approximation $\tilde f(x)$ such that
$$|\tilde f(x) - f(x)| \le 2^{-m}$$
for all such $x$, and the running time is $\mathrm{poly}(dn + m)$. Formally, with quantization $Q_n$ for inputs and $Q_m$ for outputs, define the finite map
$$f_{n,m} = Q_m \circ f \circ Q_n,$$
and require $f_{n,m}$ to be computable in time polynomial in the total bit-budget $dn + m$ (Poggio, 13 Oct 2025).
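For concreteness, the following minimal Python sketch spells out the quantized map $f_{n,m} = Q_m \circ f \circ Q_n$; the helper names (`quantize`, `finite_map`) and the example function are illustrative choices, not constructions from the paper.

```python
import numpy as np

def quantize(x, bits):
    """Round each coordinate of x in [0, 1] down to a grid with 2**bits levels."""
    levels = 2 ** bits
    return np.floor(np.asarray(x, dtype=float) * levels) / levels

def finite_map(f, x, n_bits, m_bits):
    """The finite map f_{n,m} = Q_m o f o Q_n: quantize the input to n bits per
    dimension, apply f, then quantize the output to m bits.  Efficiency means
    this map is computable in time polynomial in the total bit-budget."""
    x_q = quantize(x, n_bits)
    return quantize(f(x_q), m_bits)

# Example: a smooth function on [0, 1]^2, evaluated with 8-bit inputs and
# 10-bit outputs (so the output grid error is at most 2**-10).
f = lambda x: 0.5 * (np.sin(np.pi * x[0]) + x[1] ** 2)
print(finite_map(f, [0.3141, 0.7182], n_bits=8, m_bits=10))
```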
2. Circuit Representations: Bounded-Fan-In and Compositional Sparsity
By classical complexity theory, any Turing machine running in polynomial time on an $N$-bit input (here $N = dn + m$) can be emulated by a Boolean circuit (size and depth polynomial in $N$) with bounded fan-in (typically 2 or 3) per gate. This circuit expresses the computation at the chosen discretization as a directed acyclic graph (DAG) in which each node computes a simple (Boolean) function of its small set of parents. Thus, efficient Turing computability at finite precision automatically induces a compositional architecture: the circuit factors the overall computation into local modules of bounded arity $k$, with depth $\mathrm{poly}(N)$.
The concept of “compositional sparsity”, as precisely formalized in (Poggio, 13 Oct 2025), refers to this bounded-fan-in, polynomial-size DAG structure.
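The following toy sketch shows such a representation operationally: a DAG of bounded-fan-in gates evaluated in topological order. The gate set and the half-adder example are illustrative, not drawn from the paper.

```python
# Minimal sketch of a compositionally sparse representation: a Boolean
# circuit given as a DAG whose gates each read a bounded number of parents.
GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
    "NOT": lambda a: 1 - a,
}

def eval_circuit(circuit, inputs):
    """circuit: list of (node, gate, parents) triples in topological order;
    inputs: dict mapping input wire names to 0/1 values."""
    values = dict(inputs)
    for node, gate, parents in circuit:
        values[node] = GATES[gate](*(values[p] for p in parents))
    return values

# A half-adder built from fan-in-2 gates: sum and carry of two input bits.
half_adder = [
    ("s0", "XOR", ("x0", "y0")),   # sum bit
    ("c0", "AND", ("x0", "y0")),   # carry bit
]
print(eval_circuit(half_adder, {"x0": 1, "y0": 1}))  # s0 = 0, c0 = 1
```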
3. Neural Emulation: Efficient Deep Approximants
Replacing each Boolean gate in the circuit with a small neural “emulator” (i.e., a constant-size subnetwork implementing its Boolean function via standard activations such as ReLU or sigmoid), one constructs a deep neural network reflecting the circuit’s connectivity. The overall network thus has size and depth polynomial in $dn + m$ and, when presented with $n$-bit inputs, outputs an approximation $\hat f(x)$ satisfying
$$|\hat f(x) - f(x)| \le 2^{-m}$$
for all quantized inputs $x$ (Poggio, 13 Oct 2025). The construction leverages quantitative results on neural emulation of Boolean functions (e.g., Yarotsky-type constructions), establishing that the representational and approximation overhead per gate is bounded by a constant.
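To make the gate-emulation step concrete, the sketch below uses standard exact ReLU identities on $\{0,1\}$ inputs (a common textbook construction, not the paper's specific emulator): each gate becomes a constant-size affine-plus-ReLU unit, so a circuit with $G$ gates yields a network with $O(G)$ units and depth proportional to the circuit depth.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Exact ReLU emulation of fan-in-2 Boolean gates on {0, 1} inputs.
def AND(a, b): return relu(a + b - 1.0)
def OR(a, b):  return 1.0 - relu(1.0 - a - b)
def NOT(a):    return relu(1.0 - a)

# Compose emulated gates exactly as the circuit dictates, e.g.
# XOR(a, b) = AND(OR(a, b), NOT(AND(a, b))) as a small sub-network.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), "->", int(XOR(a, b)))  # reproduces the XOR truth table
```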
The summary implication:
- For any efficiently Turing-computable $f$, there exists, for every precision pair $(n, m)$, a deep network of size and depth polynomial in the total bit-budget $dn + m$ achieving uniform accuracy $2^{-m}$.
4. Precision–Complexity Scaling and Universality
The translation from Turing machine to circuit, then to deep network, preserves efficient scaling with respect to precision. Specifically:
- For target accuracy $\varepsilon = 2^{-m}$ and $n$-bit input discretization, the network size and depth are $\mathrm{poly}(dn + m) = \mathrm{poly}(dn + \log(1/\varepsilon))$ (see the arithmetic sketch after this list).
- This matches standard information-theoretic lower bounds for efficient (i.e., “tractable”) high-dimensional function approximation.
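A small arithmetic illustration of this bookkeeping, assuming the total bit-budget is tallied as $dn + m$ (a convention for this sketch, not a formula from the paper):

```python
import math

def required_output_bits(eps):
    """Output bits m needed for uniform accuracy eps = 2**-m."""
    return math.ceil(math.log2(1.0 / eps))

def bit_budget(d, n_bits, eps):
    """Total bit-budget d*n + m in which the polynomial size/depth bounds
    are expressed (the exact polynomial is construction-dependent)."""
    return d * n_bits + required_output_bits(eps)

# Example: 64-dimensional input at 16 bits per coordinate, accuracy 1e-6.
print(required_output_bits(1e-6))   # 20
print(bit_budget(64, 16, 1e-6))     # 1044
```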
Table 1: Summary of Efficiency Correspondence
| Model | Precision Parameters | Representation Size and Depth | Efficiency Guarantee |
|---|---|---|---|
| Turing Machine | $n$ input bits/dimension, $m$ output bits | Time $\mathrm{poly}(dn + m)$ | Output within $2^{-m}$ of $f(x)$ |
| Boolean Circuit | $n$, $m$ | $\mathrm{poly}(dn + m)$ gates, fan-in $\le 3$ | Circuit depth $\mathrm{poly}(dn + m)$ |
| Neural Network | $n$, $m$ | $\mathrm{poly}(dn + m)$ nodes/layers | Uniform error $\le 2^{-m}$ |
5. Compositional Approximation Rates and the Curse of Dimensionality
Classical results on compositional function approximation (see [Mhaskar, Poggio] as cited in (Poggio, 13 Oct 2025)) indicate that if a function factors as a composition of local functions, each acting on only a few variables, the required number of basis functions (such as neural units or circuit gates) to reach error $\varepsilon$ scales as $\mathcal{O}\!\left(N\,\varepsilon^{-k/s}\right)$, where $N$ is the number of active constituents, $k$ the local arity, and $s$ a smoothness parameter.
The reduction of all efficiently Turing-computable functions (at any fixed discretization) to compositionally sparse DAGs with small arity $k$ ensures that, even for high-dimensional input, the cost in parameters and computational steps is governed by the complexity of the computation rather than by the ambient dimension $d$.
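A back-of-the-envelope comparison of these rates, with all constants ignored and parameter values chosen purely for illustration:

```python
import math

def generic_params(d, s, eps):
    """Non-compositional approximation: on the order of eps**(-d/s) parameters
    for a d-variate function of smoothness s."""
    return eps ** (-d / s)

def compositional_params(N, k, s, eps):
    """Compositionally sparse target: on the order of N * eps**(-k/s) parameters,
    with N constituents of small arity k."""
    return N * eps ** (-k / s)

# Example: d = 32 variables, smoothness s = 2, accuracy eps = 0.1,
# a binary-tree composition (k = 2) with N = d - 1 = 31 constituents.
eps, d, s, k, N = 0.1, 32, 2.0, 2, 31
print(f"generic       ~ {generic_params(d, s, eps):.3g}")           # ~1e+16
print(f"compositional ~ {compositional_params(N, k, s, eps):.3g}")  # ~310
```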
6. Optimization as Hierarchical Search
A further consequence of compositional sparsity is in optimization. Hierarchical search over the space of sparse, local modules (i.e., over a family indexed by the structure of the DAG) is much more tractable than naive global optimization over all possible functions of $d$ variables. The total cost of hierarchical search is polynomial in the number of modules (Theorem 6, (Poggio, 13 Oct 2025)), as opposed to exponential in $d$.
Thus, efficient Turing computability not only yields efficient networks for approximation but also for search/optimization, as the solution space is naturally partitioned into locally independent (sparse) subproblems.
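A crude counting sketch of the gap (the gate-set size and module count below are illustrative, and this is not the argument of Theorem 6 itself): the log of the hierarchical search space grows linearly in the number of modules, whereas the log of the global space grows exponentially in $d$.

```python
import math

def log10_global_space(d):
    """All Boolean functions of d inputs: 2**(2**d) candidates."""
    return (2 ** d) * math.log10(2)

def log10_hierarchical_space(num_gates, fan_in, gate_types, num_wires):
    """Each module independently picks a gate type and fan_in parent wires,
    so the search factors into polynomially many choices per module."""
    per_module = gate_types * math.comb(num_wires, fan_in)
    return num_gates * math.log10(per_module)

d, G = 20, 100  # 20 input bits, 100 bounded-fan-in modules
print(f"global:       ~10^{log10_global_space(d):.0f} candidate functions")                   # ~10^315653
print(f"hierarchical: ~10^{log10_hierarchical_space(G, 2, 4, d + G):.0f} candidate circuits")  # ~10^446
```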
7. Broader Implications and Theoretical Significance
The equivalence between computational efficiency and compositional sparsity establishes a foundational link between theoretical computer science, circuit complexity, and deep learning. It provides a formal explanation for the empirical observation that deep, modular architectures are especially well suited to high-precision, high-dimensional function approximation tasks whenever such tasks have an underlying efficient algorithmic structure.
This line of reasoning supplies a rigorous bridge from the theory of efficient (i.e., polynomial-time) Turing computation to the practical construction of deep neural networks with guaranteed approximation and search complexity bounds. It complements existing expressivity results for deep networks, by showing that efficiency constraints inevitably lead to—and are fully captured by—a compositional architecture (Poggio, 13 Oct 2025).
In summary, efficient Turing-computable functions admit explicit, efficiently sized, and compositionally structured neural network approximants at any fixed input/output precision, revealing a near-identity between computational efficiency and compositional circuit/network sparsity.