On efficiently computable functions, deep networks and sparse compositionality (2510.11942v1)

Published 13 Oct 2025 in cs.LG

Abstract: We show that \emph{efficient Turing computability} at any fixed input/output precision implies the existence of \emph{compositionally sparse} (bounded-fan-in, polynomial-size) DAG representations and of corresponding neural approximants achieving the target precision. Concretely: if $f:[0,1]^d\to\mathbb{R}^m$ is computable in time polynomial in the bit-depths, then for every pair of precisions $(n, m_{\mathrm{out}})$ there exists a bounded-fan-in Boolean circuit of size and depth $\mathrm{poly}(n+m_{\mathrm{out}})$ computing the discretized map; replacing each gate by a constant-size neural emulator yields a deep network of size/depth $\mathrm{poly}(n+m_{\mathrm{out}})$ that achieves accuracy $\varepsilon=2^{-m_{\mathrm{out}}}$. We also relate these constructions to compositional approximation rates \cite{MhaskarPoggio2016b,poggio_deep_shallow_2017,Poggio2017,Poggio2023HowDS} and to optimization viewed as hierarchical search over sparse structures.

Summary

  • The paper’s main contribution is showing that efficiently computable functions can be represented as compositionally sparse DAGs, enabling effective neural approximations.
  • It details an algorithmic transformation from polynomial-time Turing computations to bounded-fan-in Boolean circuits and neural subnetworks.
  • The study underscores practical optimization advantages by reducing exponential search complexity to polynomial via a sparse compositional structure.

Efficient Turing Computability and Compositional Networks

Introduction

The paper "On efficiently computable functions, deep networks, and sparse compositionality" addresses the interplay between efficient computability and compositional sparsity in the context of deep learning. It establishes that efficiently computable functions can be represented as compositionally sparse DAGs, leading to efficient neural network approximations with bounded local complexity.

Setup: Precision, Discretization, and Families

To relate continuous functions to discrete computational models, the paper introduces quantization methods for input/output precision. A key concept is the discrete map $F_{n,m_{\mathrm{out}}}$, which acts on quantized inputs to approximate real-valued functions. Efficient Turing computability is defined in terms of polynomial-time computability at any precision $(n, m_{\mathrm{out}})$.
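
As a rough illustration of this setup, the Python sketch below quantizes inputs to $n$ bits and outputs to $m_{\mathrm{out}}$ bits and wraps a real-valued function into a discretized map; apart from $F_{n,m_{\mathrm{out}}}$, the names are illustrative and not taken from the paper.

```python
import numpy as np

def quantize(x, bits):
    """Round each coordinate of x (assumed in [0, 1]) down to a 2**bits grid."""
    levels = 2 ** bits
    return np.floor(np.clip(x, 0.0, 1.0) * levels) / levels

def discretize(f, n, m_out):
    """Wrap f into a discrete map F_{n, m_out} acting on quantized inputs."""
    def F(x):
        x_q = quantize(np.asarray(x, dtype=float), n)             # n-bit inputs
        return quantize(np.asarray(f(x_q), dtype=float), m_out)   # m_out-bit outputs
    return F

# Example on [0, 1]^2: the output quantization alone contributes error <= 2**(-m_out).
f = lambda x: np.array([np.sin(np.pi * x[0]) * x[1]])
F = discretize(f, n=8, m_out=8)
print(F([0.3141, 0.70]))
```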

Discrete Family and Sparse Representation

Efficiently computable functions possess discrete representation families $\{F_{n,m_{\mathrm{out}}}\}$ that are compositionally sparse. Such families are characterized by DAGs with bounded local arity whose size scales polynomially with the precision parameters, thereby avoiding the curse of dimensionality.
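
The toy sketch below makes "compositionally sparse" concrete: a DAG in which every node has at most a fixed number of parents, evaluated by composing its local constituent functions. The arity bound and node names are illustrative, not the paper's construction.

```python
MAX_FAN_IN = 2  # bounded local arity (illustrative bound)

class Node:
    """A node of a compositionally sparse DAG: at most MAX_FAN_IN parents."""
    def __init__(self, name, fn=None, parents=()):
        assert len(parents) <= MAX_FAN_IN, "fan-in bound violated"
        self.name, self.fn, self.parents = name, fn, parents

    def eval(self, inputs, cache=None):
        cache = {} if cache is None else cache
        if self.name not in cache:
            if not self.parents:      # leaf node: read an input variable
                cache[self.name] = inputs[self.name]
            else:                     # internal node: apply its local function
                cache[self.name] = self.fn(*(p.eval(inputs, cache) for p in self.parents))
        return cache[self.name]

# f(x1, x2, x3) = (x1 + x2) * x3 built from binary constituent functions.
x1, x2, x3 = Node("x1"), Node("x2"), Node("x3")
s = Node("s", lambda a, b: a + b, (x1, x2))
root = Node("f", lambda a, b: a * b, (s, x3))
print(root.eval({"x1": 0.2, "x2": 0.3, "x3": 0.5}))  # 0.25
```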

From Polynomial-Time Computability to Bounded-Fan-In Circuits

Efficiently computable functions can be simulated by polynomial-size Boolean circuits. The construction transforms Turing machine computations into circuits with bounded fan-in, exploiting the locality of the machine's update rules. The transformation is algorithmic, yielding P-uniform circuit families that are crucial for the structured representation.
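
The key observation behind such simulations is locality: in the standard tableau-style construction, each cell of the next configuration depends only on a constant-size window of the current one, which is exactly what bounded fan-in captures. The sketch below illustrates this locality with a toy one-dimensional update rule rather than an actual Turing machine transition table.

```python
def step(config, local_rule):
    """One parallel update of a 1D tape: cell i's next value depends only on
    cells i-1, i, i+1, so the corresponding 'gate' has fan-in 3."""
    n = len(config)
    return [local_rule(config[(i - 1) % n], config[i], config[(i + 1) % n])
            for i in range(n)]

# Toy local rule (XOR of the neighborhood) standing in for a TM transition.
rule = lambda a, b, c: a ^ b ^ c
tape = [0, 1, 0, 0, 1, 1, 0, 1]
for _ in range(3):        # polynomially many steps -> polynomial-size circuit
    tape = step(tape, rule)
print(tape)
```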

Bounded-Fan-In Circuits

For any efficiently computable function, corresponding Boolean circuits with bounded fan-in can be constructed. These circuits have size and depth polynomial in the input/output bit-depths, ensuring compositionally sparse DAG representations.

Neural Emulation at Fixed Precision

The paper demonstrates how Boolean circuits can be emulated by neural networks at fixed precision. Each Boolean gate is replaced by a neural subnetwork, maintaining the compositional sparsity and achieving the desired precision through careful error propagation control.
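
A standard way to realize such gate-by-gate emulation, shown below as a sketch rather than the paper's exact construction, replaces each Boolean gate with a constant-size ReLU unit that is exact on $\{0,1\}$ inputs.

```python
def relu(z):
    return max(0.0, z)

def AND(a, b):   # relu(a + b - 1) equals a AND b for a, b in {0, 1}
    return relu(a + b - 1.0)

def OR(a, b):    # 1 - relu(1 - a - b) equals a OR b for a, b in {0, 1}
    return 1.0 - relu(1.0 - a - b)

def NOT(a):      # a single affine unit
    return 1.0 - a

# Emulating a two-level circuit: XOR(a, b) = OR(AND(a, NOT(b)), AND(NOT(a), b)).
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

assert [XOR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]
```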

Neural Network Construction and Error Management

Neural subnetworks are used to emulate logic gates with standard activations, ensuring compositional sparsity is preserved. The resultant networks have size and depth that grow polynomially with precision, supporting efficient approximation within specified accuracy bounds.
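
One common mechanism for such error control, offered here as an illustrative assumption rather than the paper's specific scheme, is to follow each emulated gate with a small ReLU "hardening" unit that snaps near-Boolean values back to exactly 0 or 1, preventing errors from compounding with depth.

```python
def relu(z):
    return max(0.0, z)

def harden(x, slope=4.0):
    """Piecewise-linear clamp built from two ReLUs: returns exactly 0 or 1
    whenever x is within 0.25 of a Boolean value (illustrative choice)."""
    y = relu(slope * (x - 0.5) + 0.5)   # push values well below 0.5 to 0
    return 1.0 - relu(1.0 - y)          # saturate values well above 0.5 at 1

noisy_zero, noisy_one = 0.12, 0.91      # gate outputs corrupted by small errors
print(harden(noisy_zero), harden(noisy_one))   # -> 0.0 1.0
```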

Relation to Compositional Approximation and Autoregressive Universality

Efficient Turing computability implies compositional sparsity, aligning with established results in approximation theory. The compositional structure enables efficient neural approximations and favorable training dynamics. Autoregressive universality complements this by showing the existence of datasets on which token predictors achieve universal approximation.

Compositional Advantages in Optimization

Compositional sparsity offers significant optimization advantages: hierarchical procedures reduce search complexity from exponential to polynomial. This brings practical benefits in terms of efficient model training and deployment in real-world applications.
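
A back-of-the-envelope count, included here purely as illustration and not taken from the paper, conveys the gap: searching over all Boolean functions of $n$ inputs ranges over $2^{2^n}$ candidates, whereas filling a known sparse DAG with $g$ bounded-fan-in gates ranges over only $16^g$ combinations (and just $16g$ options if the gates are chosen hierarchically, one node at a time).

```python
def unstructured_candidates(n):
    """Number of arbitrary Boolean functions on n input bits."""
    return 2 ** (2 ** n)

def compositional_candidates(num_gates, fan_in=2):
    """Number of ways to fill a known sparse DAG with fan-in-bounded gates."""
    gates_per_node = 2 ** (2 ** fan_in)        # 16 possible binary gates
    return gates_per_node ** num_gates

print(unstructured_candidates(6))              # 2**64: doubly exponential in n
print(compositional_candidates(num_gates=10))  # 16**10: exponential only in g, not in 2**n
```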

Boolean vs. Real: From Discrete to Smooth Networks

The transition from discrete computation to real-valued functions involves smooth lifting of Boolean circuits, maintaining the sparse structure. This allows efficiently computable functions to leverage deep networks for smooth approximation, bridging the gap between logical and continuous models.
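
A common choice for such a lifting, given here as an example and not necessarily the paper's, is the multilinear extension of each gate: it agrees with the Boolean gate on $\{0,1\}$ and is smooth on $[0,1]^2$, so the sparse DAG structure carries over unchanged.

```python
def and_smooth(a, b):   # multilinear extension of AND
    return a * b

def or_smooth(a, b):    # multilinear extension of OR
    return a + b - a * b

def not_smooth(a):      # multilinear extension of NOT
    return 1.0 - a

# The lifted gates agree with the Boolean ones on the corners of [0, 1]^2.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert and_smooth(a, b) == float(int(a) & int(b))
        assert or_smooth(a, b) == float(int(a) | int(b))
```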

Conclusion

The paper provides a robust framework linking efficient computability to structured, sparse neural representations. This connection supports efficient approximation, optimization, and potential universality, aligning computational and learning theories with practical deep learning applications.
