
Chaos Expansion Networks

Updated 3 April 2026
  • Chaos Expansion Networks are surrogate modeling frameworks that fuse spectral chaos expansions with neural computational strategies for uncertainty quantification.
  • They employ adaptive neural parameterizations and empirically learned orthogonal bases to overcome classical polynomial chaos limitations in high-dimensional and stochastic settings.
  • These networks enable efficient, closed-form computation of output moments and Sobol indices, supporting applications in engineering, SPDE operator learning, and Bayesian inference.

Chaos expansion networks are a class of surrogate modeling and operator learning architectures that combine spectral chaos expansions—originally from polynomial chaos expansion (PCE) theory—with neural network or deep probabilistic computational strategies. The principal innovation is the formal embedding of data-driven, spectral (often orthogonal) basis expansions into neural computation graphs, enabling tractable and accurate surrogate modeling, scalable operator learning, and rigorous uncertainty quantification (UQ) in high-dimensional and stochastic settings. These networks generalize classical chaos expansion surrogates by leveraging neural parameterizations and architectural advances to overcome the limitations of traditional PCE in handling arbitrary input distributions, strong dependence, and high parameter dimensionality (Yao et al., 2021, Oladyshkin et al., 2023, Bahmani et al., 17 Feb 2025, Shi et al., 3 Jan 2026, Neufeld et al., 2024, Exenberger et al., 28 Jul 2025).

1. Mathematical Foundations: Chaos Expansions and Neural Parameterizations

Classical polynomial chaos expansions express a response $Y = f(\xi)$ to uncertain inputs $\xi \in \mathbb{R}^d$ as a sum over orthogonal polynomials:

$$\hat{y}^{(p)}(\xi) = \sum_{i=1}^{M} c_i \Phi_i(\xi),$$

where $\{\Phi_i\}$ is an orthonormal basis with respect to the law of $\xi$, and $M = \binom{d+p}{p}$ for total order $p$. Classical PCE requires that the distribution of $\xi$ match the Askey scheme (e.g., Hermite polynomials for Gaussian inputs). Arbitrary PCE (aPC) replaces this by constructing univariate orthonormal polynomials directly from raw moments, enabling basis construction for inputs with nonparametric or empirical distributions (Yao et al., 2021, Oladyshkin et al., 2023).
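As a concrete illustration, the following NumPy sketch builds a univariate aPC-style basis by Gram-Schmidt orthogonalization of monomials under the empirical measure; this is one of several equivalent moment-based constructions, and the function name is ours rather than from any cited implementation:

```python
import numpy as np

def apc_basis(samples, p):
    """Polynomials of degree 0..p, orthonormal w.r.t. the empirical
    distribution of `samples`, via Gram-Schmidt on 1, x, ..., x^p."""
    V = np.vander(samples, p + 1, increasing=True)  # (n, p+1) monomials
    coeffs = []  # coeffs[k][j] multiplies x^j in the k-th basis polynomial
    for k in range(p + 1):
        c = np.zeros(p + 1)
        c[k] = 1.0  # start from the monomial x^k
        for prev in coeffs:
            # Subtract the empirical projection onto earlier basis polys.
            proj = np.mean((V @ c) * (V @ prev))
            c = c - proj * prev
        # Normalize to unit empirical second moment.
        c = c / np.sqrt(np.mean((V @ c) ** 2))
        coeffs.append(c)
    return np.array(coeffs)  # (p+1, p+1)

# Example: basis orthonormal w.r.t. a skewed, nonparametric input law.
rng = np.random.default_rng(0)
xi = rng.lognormal(size=5000)
Phi = apc_basis(xi, p=3)
G = np.vander(xi, 4, increasing=True) @ Phi.T  # basis values at samples
print(np.round(G.T @ G / len(xi), 2))          # approximately identity
```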

Chaos expansion networks extend this paradigm by making the chaos coefficients {ci}\{c_i\} or the basis itself data-adaptive and trainable:

  • Adaptive coefficients: chaos coefficients $C_i(\xi;\theta)$ parameterized by DNNs, yielding surrogates of the form $\hat{y}(\xi) = \sum_{i=1}^{M} C_i(\xi;\theta)\,\Phi_i(\xi)$ (Deep aPCE) (Yao et al., 2021); a minimal sketch follows this list.
  • Neural basis: direct replacement of the polynomial basis $\{\Phi_i\}$ by learned neural networks $\{\phi_i(\cdot;\theta_i)\}$, orthogonalized empirically over the data (Neural Chaos) (Bahmani et al., 17 Feb 2025).
  • Layerwise polynomial expansions: at each node or layer, responses are expanded in an aPC basis constructed on empirical activations (Deep aPC NN) (Oladyshkin et al., 2023).
  • Operator parameterizations: for stochastic PDEs, the coefficients $\{u_\alpha\}$ of a Wiener chaos expansion are learned with neural operator backbones (FNO, GNO, MLP) (Shi et al., 3 Jan 2026, Neufeld et al., 2024).
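To make the adaptive-coefficient variant concrete, here is a minimal PyTorch sketch, assuming a fixed probabilists' Hermite basis for standard Gaussian inputs; the class and function names are ours for illustration, not the reference Deep aPCE implementation:

```python
import torch
import torch.nn as nn

class AdaptiveChaosSurrogate(nn.Module):
    """Surrogate y_hat(xi) = sum_i C_i(xi; theta) * Phi_i(xi), where the
    coefficients C_i come from a small MLP (Deep aPCE-style sketch)."""
    def __init__(self, dim, n_basis, hidden=64):
        super().__init__()
        self.coeff_net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_basis),
        )

    def forward(self, xi, basis_vals):
        # xi: (batch, dim); basis_vals: (batch, n_basis) precomputed Phi_i(xi)
        coeffs = self.coeff_net(xi)               # (batch, n_basis)
        return (coeffs * basis_vals).sum(dim=-1)  # (batch,)

def hermite_basis(x, order):
    """Probabilists' Hermite polynomials He_0..He_order at x, normalized
    to be orthonormal under the standard Gaussian (E[He_n^2] = n!)."""
    H = [torch.ones_like(x), x]
    for n in range(1, order):
        H.append(x * H[n] - n * H[n - 1])         # He_{n+1} recurrence
    fact = torch.cumprod(torch.arange(1, order + 1, dtype=x.dtype), dim=0)
    norms = torch.cat([torch.ones(1, dtype=x.dtype), fact.sqrt()])
    return torch.stack(H, dim=-1) / norms         # (batch, order+1)
```

The sketch is univariate; for multivariate inputs, tensor products of such univariate polynomials over a total-degree multi-index set give the basis $\{\Phi_i\}$.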

This framework admits rigorous computation of output moments, variances, and Sobol indices, and directly encodes an uncertainty quantification pipeline within neural surrogates (Exenberger et al., 28 Jul 2025).

2. Learning Algorithms and Semi-Supervision

Chaos expansion networks exploit the orthogonality structure of the chaos basis for efficient training—often via hybrid loss functions blending supervised and unsupervised (semi-supervised) components.

  • Labeled loss: mean absolute error between surrogate predictions and observed outputs over a small set of labeled samples.
  • Unlabeled regularization: enforce the mean and variance relationships implied by the orthogonality of the chaos basis on large batches of unlabeled data. These constraints ensure property alignment: the network-predicted mean matches the coefficient of the constant (order-0) basis function, and the variance matches the sum of squared higher-order coefficients.
  • Optimization: all terms are combined into a global loss and optimized via Adam; the network dynamically refines the chaos coefficients per input (see the sketch after this list). This approach yields order-of-magnitude reductions in labeled-sample cost with preserved or improved surrogate accuracy.
  • Sequential dictionary learning: the neural basis functions are added sequentially, each time minimizing the residual error and ensuring empirical orthogonality to the previously learned basis functions.
  • Variants: purely continuous (joint optimization of the neural basis functions and coefficient networks), or hybrid discrete-continuous with CMD (Canonical Multiplicative Decomposition) substeps for precise orthogonality in sample spaces.
  • Overfitting control: early stopping, small network width, normalization, and tracking the train/test generalization gap as the basis grows.
  • For stochastic (P)DEs: after projecting the driving noise onto a finite Wick-Hermite basis, the chaos coefficients are learned as continuous operator-valued maps from initial conditions and projected noise to space (and time) indices, fully reconstructing solution trajectories in a single network evaluation.
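A minimal sketch of such a hybrid loss, assuming the `AdaptiveChaosSurrogate` sketched in Section 1 and an orthonormal basis whose column 0 is the constant function; this is illustrative, not the published training code:

```python
def semi_supervised_loss(model, xi_lab, basis_lab, y_lab,
                         xi_unl, basis_unl, w_mean=1.0, w_var=1.0):
    """Labeled MAE plus orthogonality-implied moment constraints.
    Under exact orthonormality, mean = c_0 and var = sum_{i>=1} c_i^2."""
    # Supervised term on the (small) labeled set.
    loss_lab = (model(xi_lab, basis_lab) - y_lab).abs().mean()

    # Unsupervised moment matching on the (large) unlabeled batch.
    coeffs = model.coeff_net(xi_unl)                 # (batch, n_basis)
    y_unl = (coeffs * basis_unl).sum(dim=-1)
    pred_mean, pred_var = y_unl.mean(), y_unl.var()
    c_mean = coeffs[:, 0].mean()                     # batch-average c_0
    c_var = (coeffs[:, 1:] ** 2).sum(dim=-1).mean()  # sum of c_i^2, i >= 1
    loss_mean = (pred_mean - c_mean) ** 2
    loss_var = (pred_var - c_var) ** 2

    return loss_lab + w_mean * loss_mean + w_var * loss_var
```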

3. Architecture: Embedding Chaos Expansions in Deep Models

Chaos expansion networks are instantiated in several architectural patterns.

| Approach | Basis Parameterization | Adaptive Elements |
| --- | --- | --- |
| Deep aPCE (Yao et al., 2021) | Data-driven aPC basis; DNN on coefficients | Coefficient network per input |
| Neural Chaos (Bahmani et al., 17 Feb 2025) | NN stochastic basis, orthogonalized (joint) | Both basis and trunk networks |
| Deep aPC NN (Oladyshkin et al., 2023) | Layerwise empirical aPC basis per layer | Basis structure adapts per layer |
| DeepPCE (Exenberger et al., 28 Jul 2025) | Hierarchical sum-product circuit of PCEs | Multi-layer, decompositional |
| Chaos-Operator (Shi et al., 3 Jan 2026, Neufeld et al., 2024) | Wick-Hermite basis (fixed); operator backbone as NN | All chaos coefficients as operator nets |

In all cases, orthonormality is either prescribed (classical chaos), constructed empirically (aPC, NN-based), or enforced during training via explicit moment-matching constraints.

DeepPCE and similar approaches exploit sum-product circuit architectures to decompose the exponential combinatorics of classical PCE into manageable, hierarchically composed local expansions. This yields scalability to high input dimensions, enabling exact analytic moment and sensitivity calculations in high-dimensional UQ tasks (Exenberger et al., 28 Jul 2025).
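The following NumPy sketch illustrates the sum-product composition under our own simplified assumptions (two single-variable scopes, degree-2 Hermite expansions); it is a schematic of the circuit structure, not the published DeepPCE architecture:

```python
import numpy as np

def product_node(local_vals):
    """Multiply local PCE outputs over disjoint scopes: (k, batch) -> (batch,)."""
    return np.prod(local_vals, axis=0)

def sum_node(weights, child_vals):
    """Weighted mixture of child node outputs: (m,), (m, batch) -> (batch,)."""
    return np.tensordot(weights, child_vals, axes=1)

# Normalized Hermite basis he_0, he_1, he_2 (orthonormal under N(0, 1)).
he = lambda x: np.stack([np.ones_like(x), x, (x**2 - 1) / np.sqrt(2)])

rng = np.random.default_rng(0)
xi = rng.standard_normal((2, 1000))                 # (scopes, batch)
c = rng.standard_normal((2, 3)) * 0.3               # local PCE coefficients
local = np.einsum('sk,skb->sb', c, np.stack([he(xi[0]), he(xi[1])]))

# Mix a product of both scopes with scope 0 alone.
y = sum_node(np.array([0.7, 0.3]),
             np.stack([product_node(local), local[0]]))
print(y.shape)  # (1000,)
```

Each local expansion stays small, so depth and mixture weights, rather than a single global multi-index set, carry the high-dimensional structure.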

4. Uncertainty Quantification and Analytical Properties

A key strength of chaos expansion networks is the preservation of closed-form, rigorously computable statistical properties:

  • Output mean: Coefficient of order-0 basis function.
  • Variance: Sum of squared coefficients of all higher-order terms.
  • Sobol indices: Sums over subsets of coefficients with support on specific input variable groupings.
  • Full trajectory sampling: In SPDE/SDE operator models, a single forward propagation reconstructs the entire stochastic solution via the truncated chaos expansion, bypassing expensive Monte Carlo rollouts (Shi et al., 3 Jan 2026, Neufeld et al., 2024).
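To illustrate the last point, a minimal NumPy sketch reconstructs trajectories from a truncated Wick-Hermite expansion; the closed-form coefficient functions below are placeholders for what a chaos-operator network would predict:

```python
import numpy as np
from math import factorial
from itertools import product

K, P = 3, 2                      # noise modes, max total Hermite degree
multi_indices = [a for a in product(range(P + 1), repeat=K) if sum(a) <= P]

def hermite(n, x):               # probabilists' Hermite polynomials He_n
    if n == 0: return np.ones_like(x)
    if n == 1: return x
    return x * hermite(n - 1, x) - (n - 1) * hermite(n - 2, x)

def wick(alpha, eta):            # normalized Wick-Hermite product xi_alpha
    out = np.ones(eta.shape[0])
    for k, a in enumerate(alpha):
        out *= hermite(a, eta[:, k]) / np.sqrt(factorial(a))
    return out

t = np.linspace(0.0, 1.0, 50)
# Placeholder deterministic coefficient functions u_alpha(t); in a
# chaos-operator network these are outputs of the learned backbone.
u_alpha = {a: np.exp(-sum(a) * t) * np.cos(np.pi * t * (1 + sum(a)))
           for a in multi_indices}

eta = np.random.default_rng(1).standard_normal((500, K))  # noise projections
paths = sum(np.outer(wick(a, eta), u_alpha[a]) for a in multi_indices)
print(paths.shape)               # (500, 50): 500 trajectories in one pass
```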

These properties are preserved in adaptive and deep settings as long as orthogonality holds at each layer or in the neural basis parameterization. This enables direct, differentiable UQ in complex or operator-valued settings.
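Concretely, given the coefficient vector and the multi-indices recording which inputs each term involves, these statistics reduce to sums of squares; a minimal NumPy sketch (notation ours) is:

```python
import numpy as np

def chaos_moments(coeffs):
    """Mean and variance of a PCE with orthonormal basis, constant term first."""
    return coeffs[0], np.sum(coeffs[1:] ** 2)

def first_order_sobol(coeffs, multi_indices):
    """S_j = (sum of c_i^2 over terms involving only input j) / variance."""
    _, var = chaos_moments(coeffs)
    S = np.zeros(len(multi_indices[0]))
    for c, alpha in zip(coeffs[1:], multi_indices[1:]):
        active = [j for j, a in enumerate(alpha) if a > 0]
        if len(active) == 1:             # term involves exactly one input
            S[active[0]] += c ** 2
    return S / var

# Degree-2 expansion in d = 2 with total-degree multi-index set.
idx = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
c = np.array([1.0, 0.8, 0.3, 0.1, 0.5, 0.0])
print(chaos_moments(c))                  # (1.0, 0.99)
print(first_order_sobol(c, idx))         # per-input first-order indices
```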

5. Scalability, High-dimensionality, and Limitations

Traditional PCE-based surrogates are limited by the combinatorial explosion of the basis: for $d$ dimensions and total degree $p$, $M = \binom{d+p}{p}$ terms appear. Chaos expansion networks address this by:

  • Hierarchical partitioning: DeepPCE partitions variables into small regions/scopes; expansions are composed via sum-product circuits.
  • Adaptive chaos coefficients: DNNs dynamically modulate expansion weights, exploiting nonlinearity beyond shallow polynomial truncations.
  • Basis learning: Neural Chaos and Deep aPC NN learn orthogonal bases directly from raw data, overcoming fixed distributional assumptions and handling strong input dependence.

Empirical studies demonstrate that DeepPCE and Deep aPCE achieve error rates and surrogate fidelity in high-dimensional settings that match or exceed classic surrogates, while drastically reducing the need for labeled data or simulations (Yao et al., 2021, Exenberger et al., 28 Jul 2025).

A limitation remains that sample requirements may still grow with dimension in extremely high-dimensional settings. Adaptive or sparse basis selection, as well as further architectural innovations, remain open research areas (Yao et al., 2021).
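The basis growth motivating these designs is easy to verify directly with a few lines of standard-library Python:

```python
from math import comb

# Number of PCE terms M = comb(d + p, p) at total degree p = 3.
for d in (5, 10, 50, 100, 500):
    print(f"d = {d:4d}: M = {comb(d + 3, 3):,}")
# d =    5: M = 56
# d =   10: M = 286
# d =   50: M = 23,426
# d =  100: M = 176,851
# d =  500: M = 21,084,251
```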

6. Applications and Case Studies

Chaos expansion networks are validated on a wide range of forward modeling, UQ, and operator learning problems:

  • Engineering UQ: structural reliability (clutch system, satellite frame), high-dimensional stochastic mechanics, thermal–mechanical dynamics. Deep aPCE attains small failure-probability errors from far fewer labeled samples than UDR or classic PCE (Yao et al., 2021).
  • Stochastic PDEs: SPDE/SDE solution operators for Allen–Cahn, stochastic Navier–Stokes, financial (OU, Heston) processes, and graph Schrödinger bridges (Shi et al., 3 Jan 2026, Neufeld et al., 2024).
  • Operator learning: PCE and DeepPCE enable uncertainty-aware, operator-valued surrogates for time-dependent and parametric PDEs, offering analytic UQ with performance rivaling DeepONet and FNO (Sharma et al., 28 Aug 2025, Exenberger et al., 28 Jul 2025).
  • Surrogate acceleration in Bayesian inference: PCE surrogates for generative model–based inverse problems (e.g., GPR tomography), utilizing hybrid VAE–PCE–PCA pipelines (Meles et al., 2023).

Scalable, accurate analytic inference (mean, variance, Sobol indices) is achieved in all cases, with computational costs far below black-box Monte Carlo or MLP-based UQ (Exenberger et al., 28 Jul 2025, Sharma et al., 28 Aug 2025).

7. Theoretical Guarantees and Open Challenges

Chaos expansion networks inherit and extend the approximation theory of PCE: rates depend on the smoothness of the target, the basis order, and the neural network capacity. Theoretical results provide explicit error-decay rates for random-feature chaos networks, and practical error rates are reported for a variety of SPDE and surrogate-modeling benchmarks (Neufeld et al., 2024, Yao et al., 2021).

Challenges include:

  • Adaptive basis truncation and term selection in very high dimensions, to circumvent the exponential scaling of classic PCE.
  • Extension to non-Gaussian and non-Hermite chaos (e.g., Lévy noise, Poisson-Charlier chaos), which require new combinatorial and algorithmic frameworks.
  • Generalization and stability in highly nonlinear or multimodal surrogate regimes.
  • Automated enforcement or optimization of network-based orthogonality constraints in continuous bases (Bahmani et al., 17 Feb 2025).

The integration of data-driven basis learning, deep architectures, and UQ positions chaos expansion networks as a central methodology in scientific machine learning, uncertainty quantification, and stochastic operator approximation.
