Deep Learning as the Disciplined Construction of Tame Objects (2509.18025v1)

Published 22 Sep 2025 in math.OC, cs.AI, cs.LG, math.LO, and stat.ML

Abstract: One can see deep-learning models as compositions of functions within the so-called tame geometry. In this expository note, we give an overview of some topics at the interface of tame geometry (also known as o-minimality), optimization theory, and deep learning theory and practice. To do so, we gradually introduce the concepts and tools used to build convergence guarantees for stochastic gradient descent in a general nonsmooth nonconvex, but tame, setting. This illustrates some ways in which tame geometry is a natural mathematical framework for the study of AI systems, especially within Deep Learning.

Summary

  • The paper establishes that nearly all deep learning architectures are definable within o-minimal structures, offering robust convergence guarantees for nonconvex, nonsmooth optimization.
  • It quantitatively shows that approximately 89% of activation functions are captured by key o-minimal frameworks, underpinning the mathematical structure of modern neural networks.
  • The paper bridges theory and practice by demonstrating how automatic differentiation in tame geometries aligns with convergence guarantees, linking rigorous analysis to practical implementation.

Deep Learning as the Disciplined Construction of Tame Objects

Introduction and Motivation

This paper presents a comprehensive exposition of the intersection between tame geometry—specifically o-minimal structures—optimization theory, and deep learning. The central thesis is that deep learning models, when viewed as compositions of elementary functions, are almost always definable within some o-minimal structure. This perspective provides a mathematically rigorous framework for analyzing the properties of deep learning architectures, especially regarding convergence guarantees for optimization algorithms such as stochastic gradient descent (SGD) in nonsmooth, nonconvex settings.

The authors argue that existing theoretical frameworks (e.g., convex analysis) are insufficient for capturing the full complexity of deep learning models, particularly those involving nonsmooth and nonconvex functions such as ReLU networks. O-minimality, by contrast, offers both composability and a restriction to well-behaved (pathology-free) objects, making it a realistic and productive framework for the study of deep learning.

O-minimal Structures and Definability in Deep Learning

O-minimal structures generalize semialgebraic geometry to include a broader class of functions, such as those involving exponentials and restricted analytic functions. The paper details several o-minimal structures relevant to deep learning:

  • Ralg: Semialgebraic sets, defined by polynomial equalities and inequalities.
  • Ran: Restricted real analytic functions (analytic functions restricted to compact boxes).
  • Rexp: The structure generated by the (unrestricted) real exponential function.
  • RPfaff: The Pfaffian closure, which includes functions arising as solutions of certain triangular systems of polynomial differential equations.

The majority of activation and loss functions used in deep learning (ReLU, Softplus, GELU, etc.) are shown to be definable in these structures, with RPfaff being sufficient for nearly all practical cases. The composability property ensures that compositions of definable functions remain definable, which is crucial for the layered structure of neural networks.
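
For concreteness, here is a minimal, illustrative placement of three common activations; the assignments follow standard definability facts and are a sketch rather than a table reproduced from the paper:

\[
\mathrm{ReLU}(x) = \max(0, x), \qquad
\mathrm{Softplus}(x) = \log\bigl(1 + e^{x}\bigr), \qquad
\mathrm{GELU}(x) = \tfrac{x}{2}\Bigl(1 + \mathrm{erf}\bigl(x/\sqrt{2}\bigr)\Bigr),
\]

definable in Ralg, Rexp, and RPfaff respectively: ReLU is semialgebraic, Softplus is built from the unrestricted exponential and its definable inverse, and erf belongs to a Pfaffian chain because its derivative is proportional to e^{-x^2}, which satisfies the polynomial differential equation y' = -2xy.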

The authors provide a quantitative estimate that approximately 89% of activation functions surveyed in the literature are definable in one of the main o-minimal structures, with the remainder involving either unrestricted (periodic) trigonometric functions or fractional derivatives.

Pathology-Free Optimization and Stratification

A key advantage of o-minimality is the exclusion of pathological objects (e.g., functions with infinite oscillations or undecidable optimization problems). Definable sets and functions enjoy strong regularity properties:

  • Small Sets Theorem: Various notions of "smallness" (finiteness, measure zero, nowhere dense) coincide for definable sets.
  • Dimension Theorem: A unique, coherent notion of dimension exists for definable sets.
  • Stratification Theorems: Any definable set can be partitioned into finitely many smooth manifolds (strata) that fit together in a controlled way (Verdier/Whitney stratifications).

These properties enable the decomposition of nonsmooth, nonconvex functions (such as those arising in deep learning) into regions where the function behaves smoothly, facilitating the analysis of optimization algorithms.
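
As a minimal illustration (not taken from the paper), the graph of the ReLU function admits a Whitney stratification into three smooth strata:

\[
S_{1} = \{(x, 0) : x < 0\}, \qquad
S_{2} = \{(0, 0)\}, \qquad
S_{3} = \{(x, x) : x > 0\},
\]

two one-dimensional pieces on which the function is smooth, meeting at the single zero-dimensional stratum carrying the nonsmoothness; an optimization analysis can then proceed smoothly on S1 and S3 and treat only the lower-dimensional stratum S2 separately.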

Convergence Guarantees for Stochastic Subgradient Methods

The paper provides a detailed account of how o-minimality underpins convergence guarantees for the Stochastic Subgradient Method (SSM), including SGD. The main results are:

  • Projection Formula: The Clarke subdifferential of a definable locally Lipschitz function is contained in the sum of the Riemannian gradient along the stratum and the normal space to the stratum at each point.
  • Chain Rule for Definable Functions: Along absolutely continuous curves, the derivative of the function is given, almost everywhere, by the inner product of the Riemannian gradient with the curve's velocity (see the display after this list).
  • Convergence of SSM: Under mild conditions (bounded iterates, appropriate step sizes), the iterates of SSM converge to Clarke critical points of the objective function, and the sequence of function values converges.
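
In the notation commonly used for these results (a sketch, not the paper's exact formulation: M_x denotes the stratum containing x, \nabla_{M_x} f the Riemannian gradient of the restriction of f to M_x, N_x M_x the normal space to the stratum, \partial f the Clarke subdifferential, and x(·) an absolutely continuous curve), the first two statements read:

\[
\partial f(x) \subseteq \nabla_{M_x} f(x) + N_x M_x,
\qquad
\frac{d}{dt}\, f\bigl(x(t)\bigr) = \bigl\langle v, \dot{x}(t) \bigr\rangle
\quad \text{for all } v \in \partial f\bigl(x(t)\bigr), \ \text{for a.e. } t.
\]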

These results are nontrivial, as they apply to general nonsmooth, nonconvex functions encountered in deep learning, and rely fundamentally on the stratification and regularity properties guaranteed by o-minimality.
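
To make the algorithmic content concrete, the following is a minimal sketch (not code from the paper) of the stochastic subgradient method with Robbins-Monro step sizes, applied to a toy nonsmooth, definable objective f(w) = E|⟨a, w⟩ - b|; the data model, step-size exponent, and iteration budget are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)          # hypothetical ground-truth parameters

def stochastic_subgradient(w):
    """Sample (a, b) and return a Clarke subgradient of |<a, w> - b| at w."""
    a = rng.normal(size=d)
    b = a @ w_true + 0.1 * rng.normal()
    r = a @ w - b
    # sign(r) * a is a valid subgradient selection: at r = 0 the subdifferential
    # of |.| is the interval [-1, 1], so the choice sign(0) = 0 is admissible.
    return np.sign(r) * a

w = np.zeros(d)
for k in range(1, 50_001):
    alpha = k ** -0.75               # sum(alpha_k) diverges, sum(alpha_k^2) converges
    w = w - alpha * stochastic_subgradient(w)

print("distance to w_true:", np.linalg.norm(w - w_true))
```

Under the boundedness and step-size conditions quoted above, the cited results guarantee that accumulation points of such iterate sequences are Clarke critical for the definable objective.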

Automatic Differentiation and Practical Implementation

The authors address the gap between theoretical convergence guarantees and practical implementation, particularly in the context of automatic differentiation (AD) frameworks such as PyTorch and TensorFlow. They highlight that AD methods, when applied to nonsmooth functions, may not always yield elements of the Clarke subdifferential, and that the behavior of AD at nonsmooth points is subtle.

Recent work formalizes AD as conservative set-valued fields, showing that for definable functions, AD returns the correct gradient almost everywhere, and that convergence guarantees for AD-based SSM implementations can be established under the tame geometry framework.
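
As an illustration of the subtlety (a sketch, not an example reproduced from the paper), write the identity function as a difference of ReLUs; assuming the common convention relu'(0) = 0 used by PyTorch, reverse-mode AD then returns a value at the nonsmooth point that lies outside the Clarke subdifferential, which is the singleton {1}:

```python
import torch

x = torch.tensor(0.0, requires_grad=True)
f = torch.relu(x) - torch.relu(-x)   # equals x for every input
f.backward()
print("f(0) =", f.item(), "| AD gradient =", x.grad.item())  # prints 0.0 and 0.0, not 1.0
```

The conservative-field formalization explains why such discrepancies are confined to a measure-zero set of inputs, so the convergence analysis described above still applies to practical AD-based training.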

Implications and Future Directions

The adoption of tame geometry as a foundational framework for deep learning has several important implications:

  • Theoretical Guarantees: Provides rigorous convergence results for optimization algorithms in settings previously considered intractable (nonsmooth, nonconvex).
  • Algorithm Design: Enables the disciplined construction of deep learning architectures with predictable optimization behavior.
  • Statistical Learning: Definable hypothesis spaces have finite VC dimension and are PAC learnable, linking o-minimality to generalization guarantees.
  • Computability: Optimization over tame functions is computable in the sense of finding first-order critical points, in contrast to undecidable cases involving trigonometric functions.

Future research may focus on extending these results to broader classes of functions, refining the analysis of AD in nonsmooth settings, and exploring the limits of o-minimal structures (e.g., existence of transexponential functions).

Conclusion

This paper establishes tame geometry, and specifically o-minimality, as a robust and realistic mathematical framework for the analysis and optimization of deep learning models. By demonstrating that nearly all practical deep learning architectures are definable in o-minimal structures, and by leveraging the regularity and stratification properties of tame objects, the authors provide strong theoretical foundations for convergence guarantees and algorithmic design in deep learning. The framework bridges the gap between abstract mathematical theory and practical implementation, offering a disciplined approach to the construction and analysis of AI systems.
