
Polynomial-Based Simulation Approach

Updated 25 August 2025
  • A polynomial-based simulation approach uses orthogonal polynomial expansions to represent complex stochastic and dynamical systems at reduced computational cost.
  • The paper introduces a convex optimization framework for estimating Polynomial Chaos Expansion coefficients using minimal simulation data while enforcing physical constraints.
  • The method integrates regularization techniques to select low-order terms and accurately reconstruct moments and probability distributions, outperforming classical methods in high-dimensional settings.

A polynomial-based simulation approach refers broadly to the characterization, analysis, or numerical simulation of complex systems by means of polynomial representations. Such approaches leverage the expressive capacity and well-understood properties of polynomial functions—often orthogonal polynomials—to encode, propagate, and efficiently compute the evolution, uncertainty, or structural properties of dynamical and stochastic models. Recent research has developed highly effective polynomial-based simulation techniques for a range of scenarios including stochastic dynamical systems, surrogate modeling, uncertainty quantification, and system reduction. These methods often employ sophisticated tools such as convex optimization, regularization, and orthogonality structures to address the computational challenges posed by high-dimensional or nonlinear models (Fagiano et al., 2012).

1. Polynomial Chaos Expansions and Stochastic Simulation

Polynomial Chaos Expansions (PCE) provide a systematic means of representing any square-integrable random variable as an infinite series of mutually orthogonal polynomials parameterized by a vector of independent random variables. Explicitly, a random process $v(\theta)$ can be expanded as

$$v(\theta) = \sum_{k=0}^{\infty} a_k \psi_k(\theta)$$

where the $\psi_k(\theta)$ are multivariate orthogonal polynomials constructed (via the Askey scheme) as products of univariate polynomials best matched to the distribution of each $\theta_j$. In practice, the series is truncated to $L$ terms, with the truncated expansion

$$\tilde{v}(\theta) = \sum_{k=0}^{L-1} a_k \psi_k(\theta)$$

facilitating fast, non-intrusive evaluation of moments (mean, variance, higher-order cumulants) and probability distributions from a polynomial function of random inputs. The variance, for example, reduces to

$$\operatorname{Var}[\tilde{v}(\theta)] = \sum_{k=1}^{L-1} a_k^2\,\mathbb{E}[\psi_k(\theta)^2].$$
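For intuition, consider a minimal single-variable sketch (illustrative only, not an example from the paper): with $\theta \sim \mathcal{N}(0,1)$ and probabilists' Hermite polynomials $He_k$, for which $\mathbb{E}[He_k(\theta)^2] = k!$, the response $v(\theta) = \theta^2$ has the exact two-term expansion $He_0 + He_2$, so its mean and variance follow directly from the coefficients:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H  # probabilists' Hermite He_k

# v(theta) = theta^2 = He_0(theta) + He_2(theta), since He_2(x) = x^2 - 1.
a = np.array([1.0, 0.0, 1.0])  # PCE coefficients a_0, a_1, a_2

mean = a[0]  # E[v] = a_0 by orthogonality
# Var[v] = sum_{k>=1} a_k^2 * E[He_k^2], with E[He_k^2] = k!
var = sum(a[k] ** 2 * math.factorial(k) for k in range(1, len(a)))

# Cross-check against a direct Monte Carlo estimate.
rng = np.random.default_rng(0)
theta = rng.standard_normal(200_000)
v = H.hermeval(theta, a)  # evaluates sum_k a_k He_k(theta) = theta^2
print(mean, var)          # 1.0 2.0 (exact: E[theta^2] = 1, Var[theta^2] = 2)
print(v.mean(), v.var())  # Monte Carlo estimates, close to 1.0 and 2.0
```

Once the coefficients are known, moments come from simple algebra on the $a_k$, with no further sampling of the underlying model.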

Traditionally, computation of the expansion coefficients $a_k$ required intrusive model manipulation or extensive simulation (e.g., Galerkin projection, probabilistic collocation), which is intractable in high-dimensional settings due to the sharp growth of $L$ with the number of inputs and the expansion order: $L = (n + \bar{l})!/(n!\,\bar{l}!)$ for $n$ variables and polynomial order $\bar{l}$ (Fagiano et al., 2012).
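As a quick check of this growth, the binomial formula reproduces the basis sizes of the paper's case studies (13 variables at order 2, 12 and 16 variables at order 3):

```python
from math import comb

def pce_terms(n_vars: int, max_order: int) -> int:
    """Number of terms L in a total-degree-truncated PCE:
    L = (n + l_bar)! / (n! l_bar!) = C(n + l_bar, l_bar)."""
    return comb(n_vars + max_order, max_order)

# Basis sizes for the three case studies in (Fagiano et al., 2012):
print(pce_terms(13, 2))  # 105 (nonlinear RLC circuit)
print(pce_terms(12, 3))  # 455 (organizational search model)
print(pce_terms(16, 3))  # 969 (chemical oscillator)
```

Even at modest orders, $L$ quickly outstrips any affordable simulation budget, which motivates the sparse estimation scheme below.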

2. Convex Optimization for PCE Coefficient Estimation

The polynomial-based simulation approach in (Fagiano et al., 2012) introduces an efficient convex optimization framework for determining PCE coefficients with minimal simulation data and no model rewriting. Given a modest set of $\nu \ll L$ simulation data points and a large candidate set of polynomials (potentially high $\bar{l}$), the method minimizes

$$\|W a\|_1 + \beta\,\|\tilde{\Lambda}(v_\text{data} - \tilde{\Phi} a)\|_2$$

subject to any convex constraints. Here,

  • $W = \mathrm{diag}(w(l_k))$ is a diagonal weighting matrix whose entries increase with the polynomial order $l_k$, enforcing the sparsity and expected decay of higher-order coefficients.
  • The $\ell_1$-regularization term promotes low-complexity (sparse) expansions that retain only the crucial low-order terms.
  • The weighted $\ell_2$ data-fitting term (with $\tilde{\Lambda}$ containing the input pdf values at the sampled points) ensures fidelity to the available simulation output, prioritizing samples in high-probability regions.
  • The scalar $\beta$ trades off sparsity against data fit.

This leads to a globally optimal, efficiently solvable convex (in special cases, quadratic) program that is resistant to overfitting (especially as $\nu/L \to 0$).
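The flavor of this program can be sketched numerically. The sketch below is not the paper's exact formulation: it replaces the $\ell_2$-norm misfit with the more common squared loss, sets $W$ and $\tilde{\Lambda}$ to identity, and uses a synthetic Gaussian matrix in place of $\tilde{\Phi}$; the $\ell_1$-regularized problem is then solved by iterative soft-thresholding (ISTA):

```python
import numpy as np

def ista_l1(Phi, v, lam, n_iter=10_000):
    """Minimize 0.5 * ||v - Phi a||_2^2 + lam * ||a||_1 via ISTA
    (proximal gradient with the soft-thresholding operator)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant of the gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = a - step * (Phi.T @ (Phi @ a - v))       # gradient step on the quadratic
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return a

# Synthetic sparse-recovery demo: few samples (nu = 50), many basis terms (L = 100).
rng = np.random.default_rng(0)
nu, L = 50, 100
Phi = rng.standard_normal((nu, L)) / np.sqrt(nu)
a_true = np.zeros(L)
a_true[[3, 17, 42, 58, 90]] = [1.5, -2.0, 1.0, 2.5, -1.2]
v = Phi @ a_true

a_hat = ista_l1(Phi, v, lam=0.005)
# Indices of the five largest recovered coefficients:
support = sorted(int(i) for i in np.argsort(np.abs(a_hat))[-5:])
print(support)
```

Despite having twice as many candidate terms as data points, the $\ell_1$ penalty drives all spurious coefficients toward zero, which mirrors how the method tolerates $\nu \ll L$.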

In contrast to classical approaches, the method

  • avoids model manipulation;
  • accommodates high-dimensional polynomial expansions ($L \gg \nu$);
  • integrates additional stochastic process information (e.g., bounds, moment constraints) via convex constraints, promoting solutions that obey the qualitative structure or physical limits of the simulated system.

3. Diverse Applications and Empirical Effectiveness

Three case studies in (Fagiano et al., 2012) demonstrate the fidelity and computational gains:

| Application | Dimensionality & PCE Details | Notable Outcomes |
| --- | --- | --- |
| Nonlinear RLC circuit | 13 vars, $\bar{l}=2$, $L=105$ | Accurate mean, variance, and PDF with only 30 simulations; variance/PDF much better than least squares. |
| Chaotic organizational search model | 12 vars, Hermite PCE, $\bar{l}=3$, $L=455$, 300 samples | Moment/quartile evolution closely matched the reference Monte Carlo; convex constraints enforced. |
| Chemical oscillator (SSA simulations) | 16 vars, $\bar{l}=3$, $L=969$, 100 samples | Mean, variance, and density accurately reconstructed despite modest data and high stochastic dimension. |

In all cases, the convex optimization PCE accurately reproduced observable statistics and distributions with orders of magnitude fewer runs than classic Monte Carlo or traditional least-squares regression requires. The approach is notably robust to small $\nu/L$ ratios and extends to models with nonlinear, high-dimensional random inputs.

4. Mathematical Formulation and Integration of Process Knowledge

The polynomial-based approach is underpinned by explicit equations for truncated expansions, moment calculations, and constrained optimization:

  • Expansion: $v(\theta) \approx \tilde{v}(\theta) = \sum_{k=0}^{L-1} a_k \psi_k(\theta)$;
  • Moments: $\mathbb{E}[\tilde{v}(\theta)] = a_0$; $\operatorname{Var}[\tilde{v}(\theta)] = \sum_{k=1}^{L-1} a_k^2\,\mathbb{E}[\psi_k(\theta)^2]$;
  • Convex objective: minimize $\|W a\|_1 + \beta\,\|\tilde{\Lambda}(\tilde{v}_\text{data} - \tilde{\Phi} a)\|_2$;
  • Constraints: an explicit variance bound $\sum_{k=1}^{L-1} a_k^2\,\mathbb{E}[\psi_k(\theta)^2] \leq \bar{\sigma}^2$ and pointwise bounds $g(\psi(\theta^{(r)})\,a) \leq 0$, allowing seamless inclusion of model-based requirements such as non-negativity or finite variance, without increasing model complexity.
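Because the mean is linear and the variance is a convex quadratic in the coefficient vector, such requirements are cheap to check or impose. A small illustrative helper (the function names are not from the paper) for a one-dimensional Hermite basis, where $\mathbb{E}[He_k^2] = k!$:

```python
import math

def pce_mean(a):
    """E[v~] = a_0 for an orthogonal PCE basis with psi_0 = 1."""
    return a[0]

def pce_variance(a, psi_sq_norms):
    """Var[v~] = sum_{k>=1} a_k^2 * E[psi_k^2]."""
    return sum(ak ** 2 * nk for ak, nk in zip(a[1:], psi_sq_norms[1:]))

def variance_constraint_ok(a, psi_sq_norms, sigma_bar_sq):
    """Convex feasibility check: Var[v~] <= sigma_bar^2."""
    return pce_variance(a, psi_sq_norms) <= sigma_bar_sq

# 1-D Hermite example: E[He_k^2] = k!; v(theta) = theta^2 -> a = (1, 0, 1).
norms = [math.factorial(k) for k in range(3)]
a = [1.0, 0.0, 1.0]
print(pce_mean(a), pce_variance(a, norms))    # 1.0 2.0
print(variance_constraint_ok(a, norms, 4.0))  # True
```

In a solver, the same quadratic would appear as a convex constraint on $a$ rather than an after-the-fact check, but the quantity being bounded is identical.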

5. Comparison to Classical Strategies and Scalability

Classic methods (Galerkin/PCE, collocation) are penalized by the prohibitive cost of either model manipulation or high-dimensional sampling. The convex optimization polynomial-based simulation approach scales with the number of basis terms (easily thousands) yet can be run with only tens to hundreds of simulations, thanks to the $\ell_1$-regularized term selection. Its compatibility with regularization techniques (a weighted $\ell_2$ penalty can be substituted for a similar effect) and the convex nature of the optimization ensure scalability and stability in practical implementation, irrespective of the underlying model's nonlinearity or stochastic dimension. High expansion orders may be attempted initially, with the regularization adaptively pruning irrelevant terms.

6. Extension and Structural Advantages

An important structural property is the natural inclusion of process knowledge or physical constraints via convex optimization. For example, if the response $v(\theta)$ is known to satisfy a variance upper bound or must remain nonnegative, these can be imposed as explicit constraints without compromising tractability. This “plug-in” capacity to enforce process-consistent structure is not available in classic least-squares PCE coefficient estimation and is a central advantage of the convex optimization approach.

7. Impact for Simulation and Uncertainty Quantification

The polynomial-based simulation approach described in (Fagiano et al., 2012) enables practitioners to efficiently replace large ensembles of expensive simulations by compact polynomial surrogates that correctly reproduce not just mean responses, but higher-order moments and distributions. Its ability to handle high-dimensional uncertainty and exploit domain information via convex constraints makes it particularly suitable for uncertainty quantification, risk analysis, and robust design across fields as diverse as electronics, neuroscience, and systems biology.

The method’s efficacy at delivering accurate moment and density estimates with limited simulation budgets, while accommodating complex model forms and constraints, constitutes a significant advancement in the simulation of stochastic dynamical systems using polynomial surrogate representations.
