
Empirical Quadrature Methods

Updated 17 December 2025
  • Empirical Quadrature is a data-driven method that constructs sparse, nonnegative rules to efficiently approximate complex integrals in reduced-order modeling.
  • It leverages snapshot data and optimization techniques such as ℓ1, ℓp minimization, and greedy OMP to tailor the quadrature rule with high accuracy and reduced computational cost.
  • Applications span hyperreduction in nonlinear PDEs, uncertainty quantification, and kernel cubature, offering significant memory and speed improvements in large-scale simulations.

Empirical quadrature is a data-driven methodology for constructing sparse, nonnegative quadrature rules tailored to families of parametrized integrals or nonlinear terms arising in model reduction, uncertainty quantification, and kernel-based approximation. Unlike classical rules derived from polynomials or moment constraints, empirical quadrature (EQ) selects both nodes and weights by optimizing their ability to represent a given finite set of function evaluations, often with additional constraints such as nonnegativity, sparsity, or structural preservation. EQ has become a central hyper-reduction tool in projection-based reduced-order modeling (ROM), efficient dual-norm computation, and sparse kernel integration. Recent developments include advanced optimization strategies for weight recovery and algorithmic frameworks to address the computational scale of modern applications.

1. Empirical Quadrature: Definition, Role, and Scope

Empirical quadrature seeks to approximate integrals or large sums, typically over high-dimensional or locally supported nonlinearities, by a weighted sum involving only a small, adaptively chosen subset of points or cells. Denoting by $f:\mathbb{R}^N \to \mathbb{R}^N$ a discrete high-dimensional nonlinearity and by $V, W \in \mathbb{R}^{N\times r}$ trial and test space bases ($W^\top V = I$), a reduced-order model computes the projected nonlinearity

$$\mathcal{N}(x) = W^\top f(Vx), \quad x \in \mathbb{R}^r,$$

which still incurs $O(N)$ cost. The empirical quadrature approximation replaces the full sum with a sparse rule:
$$\mathcal{N}(x) \approx \mathcal{N}^c(x) := \sum_{m \in I_c} w_m\,\beta^m(f(Vx),\phi^n),$$
where $I_c \subset \{1, \dots, M\}$, $w_m \geq 0$, and $\beta^m(\cdot, \cdot)$ is a localized (e.g., pointwise or cell-wise) contribution (Liljegren-Sailer, 16 Dec 2025; Taddei, 2018; Mirhoseini et al., 2023). This direct sparsification yields $O(|I_c|) \ll O(N)$ online complexity while maintaining high-fidelity nonlinear approximation. Empirical quadrature is thus a primary complexity-reduction mechanism in hyperreduced ROMs.
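As a concrete illustration of the sparse surrogate above, the sketch below assembles a toy training system and recovers nonnegative weights with SciPy's NNLS solver (a simple stand-in for the purpose-built optimization strategies of Section 2; the sizes, the Galerkin choice `W = V`, and the cubic nonlinearity are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
M, r, K = 400, 4, 12                               # candidate points, reduced dim, snapshots

V, _ = np.linalg.qr(rng.standard_normal((M, r)))   # trial basis
W = V                                              # Galerkin choice, so W^T V = I
f = lambda u: u + 0.1 * u**3                       # pointwise (locally supported) nonlinearity

X = rng.standard_normal((r, K))                    # training reduced coordinates

# Row block k of A stacks the local contributions beta^m = W[m, n] * f((V x_k))_m
A = np.zeros((K * r, M))
for k in range(K):
    fu = f(V @ X[:, k])                            # shape (M,)
    A[k * r:(k + 1) * r, :] = W.T * fu             # broadcasts fu over columns
b = A @ np.ones(M)                                 # "truth": unit-weight full sums

w, _ = nnls(A, b)                                  # nonnegative weights; active-set => sparse
I_c = np.flatnonzero(w > 1e-10)
print(f"{len(I_c)} of {M} candidate points kept")

# Online: evaluate the surrogate using only the |I_c| selected points
x = rng.standard_normal(r)
full = W.T @ f(V @ x)                              # O(N) reference
sparse = W[I_c].T @ (w[I_c] * f(V[I_c] @ x))       # O(|I_c|) surrogate

# The rule reproduces the projected nonlinearity exactly at training snapshots
train_full = W.T @ f(V @ X[:, 0])
train_sparse = W[I_c].T @ (w[I_c] * f(V[I_c] @ X[:, 0]))
```

Note that NNLS returns a basic solution with at most $Kr$ nonzero weights, which is already a drastic sparsification when $Kr \ll M$.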

EQ’s empirical nature refers to the use of snapshot data from a training set—spanning representative parameter values or solution behaviors—to build the rule, as opposed to relying solely on analytic or a priori properties of the integrand (Manucci et al., 2020). Applications extend to dual norm calculation in residual estimates, hyperreduction in minimum-residual ROMs, and kernel-based cubature (Belhadji, 2023).

2. Optimization Formulations for Quadrature Rule Recovery

Construction of empirical quadrature rules is posed as a sparse nonnegative recovery problem with affine or nearly affine constraints given by snapshot data:
$$\min_{w \in \mathbb{R}^N} \|w\|_0 \;\; \text{s.t.} \;\; \|A w - b\|_2 \leq \epsilon, \; w \geq 0,$$
where $A \in \mathbb{R}^{R \times N}$ collects evaluations of the $K$ snapshot functions $f_k$ at the $N$ candidate nodes (possibly over multiple parameter values or test functions), $b$ holds the corresponding “truth” integral values, and $\epsilon$ is an admissible residual (Manucci et al., 2020). Practical implementations convexify $\ell_0$ to $\ell_1$ minimization or employ $\ell^p$ ($0 < p < 1$) minimization,
$$\min_{w \geq 0} \sum_{i=1}^N |w_i|^p \;\; \text{s.t.} \;\; \|A w - b\|_2 \leq \epsilon,$$
with explicit sum constraints (e.g., $\sum_i w_i = |\Omega|$) added for normalization. The “focal underdetermined system solver” (FOCUSS) is a popular algorithm for the non-convex $\ell^p$ case, yielding more compact rules than $\ell^1$ at greater offline computational cost (Manucci et al., 2020).
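Because $w \geq 0$, the $\ell_1$ objective reduces to $\sum_i w_i$, so the convexified problem becomes a plain linear program once the $\ell_2$ residual ball is relaxed to componentwise bounds (a common simplification; this sketch uses SciPy's HiGHS LP backend on synthetic data):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
R, N = 20, 200                          # snapshot constraints x candidate points
A = rng.random((R, N))                  # snapshot evaluations at candidate nodes
w_true = np.zeros(N)
w_true[rng.choice(N, 10, replace=False)] = rng.random(10)
b = A @ w_true                          # "truth" integrals from a sparse rule
eps = 1e-6

# For w >= 0 the l1 norm is 1^T w, so minimization is an LP once
# ||Aw - b||_2 <= eps is relaxed to the box |(Aw - b)_i| <= eps.
c = np.ones(N)
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([b + eps, -(b - eps)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")

support = np.flatnonzero(res.x > 1e-8)
print(f"l1 rule uses {len(support)} of {N} candidate points")
```

LP vertex solutions have at most as many nonzeros as there are active rows, which is why the $\ell^1$-LP route in the table below yields moderately sparse rules at low cost.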

For large-scale PDE reductions, the selection proceeds via greedy cardinality-constrained nonnegative least squares (NNLS), typified by Orthogonal Matching Pursuit (OMP), with gradient-based support selection and restricted NNLS solves at each step (Liljegren-Sailer, 16 Dec 2025, Mirhoseini et al., 2023). Volume conservation, mesh distortion, and other physical constraints may be incorporated into the feasible set.
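A minimal OMP-style loop in the spirit of the greedy procedure just described (simplified: no cardinality budget tuning or physical constraints; synthetic data; the helper name `greedy_eq` is illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def greedy_eq(A, b, tol=1e-8, max_pts=None):
    """OMP-style greedy nonnegative quadrature: at each step, add the candidate
    column most correlated with the current residual, then re-solve the
    restricted nonnegative least-squares problem on the grown support."""
    max_pts = max_pts or A.shape[0]
    I, w = [], np.zeros(0)
    res = b.copy()
    while np.linalg.norm(res) > tol * np.linalg.norm(b) and len(I) < max_pts:
        scores = A.T @ res
        scores[I] = -np.inf               # exclude already-selected columns
        I.append(int(np.argmax(scores)))
        w, _ = nnls(A[:, I], b)           # restricted NNLS solve on the support
        res = b - A[:, I] @ w
    return np.array(I), w

# Toy problem: recover a sparse nonnegative rule from snapshot constraints
rng = np.random.default_rng(0)
R, N = 30, 120
A = rng.standard_normal((R, N))
w_true = np.zeros(N)
w_true[rng.choice(N, 5, replace=False)] = 1 + rng.random(5)
b = A @ w_true

I_c, w = greedy_eq(A, b)
rel = np.linalg.norm(b - A[:, I_c] @ w) / np.linalg.norm(b)
print(f"selected {len(I_c)} points, relative residual {rel:.2e}")
```

In production codes the per-step NNLS solves are warm-started and the score computation exploits locality of the test functions, but the control flow is as above.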

Table: Quadrature Optimization Strategies

| Method | Cost Scaling | Advantages |
| --- | --- | --- |
| $\ell^1$-LP | Linear in $N$ | Robust, convex, moderate sparsity |
| $\ell^p$-FOCUSS | $O(R^3)$ (post-SVD) | Higher sparsity, tighter tolerance |
| OMP | Iterative, $O(MK)$ | Greedy, scalable, interpretable |

3. Empirical Quadrature in Reduced-Order Modeling

EQ is a core hyperreduction tool in projection-based model order reduction of nonlinear PDEs. In the standard framework (Liljegren-Sailer, 16 Dec 2025), one seeks to replace

$$b^{\text{quad}}(f(u),\phi^n) = \sum_{m=1}^M \tilde{w}_m\, f(u)(x_m)\,\phi^n(x_m)$$

with a sparse surrogate sum. The snapshot manifold is encoded as a large matrix $A \in \mathbb{R}^{Kr \times M}$. The offline EQ selection identifies a subset $I_c$ and corresponding weights $w$ such that all essential bilinear forms are accurately integrated on the reduced space.

Empirical quadrature underpins efficient implementations of the Discrete Empirical Interpolation Method (DEIM), Empirical Cubature Methods (ECM), and is pivotal for minimum-residual ROMs in the presence of nonlinear terms (Taddei, 2018, Mirhoseini et al., 2023). For residual-based ROMs, constraint satisfaction at each training parameter is enforced via weight optimization, and the resulting sparse stencil yields efficient online assembly of the reduced residual and its derivatives.

EQ is also integrated with “empirical test spaces” (ES) for fast, certified dual-norm assessment of parameterized functionals: the test space is reduced by POD, and an EQ rule is constructed to integrate optimally over the reduced test manifold (Taddei, 2018).

4. Algorithmic and Computational Advances: Structured Compression

A principal challenge is the infeasibility of directly forming or factorizing the massive snapshot matrix $A$ for large models. Structured compression exploits the multilinear (tensor) nature of $A$ to produce factorizations $A = NG$, where $N$ encodes the test-function structure and $G$ stores the snapshot data, and applies low-rank approximation to $G$ only (Liljegren-Sailer, 16 Dec 2025). The CPCA algorithm performs:

  1. Factor $A$ as $NG$ (with $N$ block/tridiagonal structured).
  2. Thin QR: $N = QR$ (diagonal $R$, cheap for block-diagonal $N$).
  3. Truncated SVD: $RG \approx U_1 \Sigma_1 V_1^\top$.
  4. Form the compressed $G_t = R^{-1} U_1 \Sigma_1$.
  5. The compressed manifold $\tilde{A} = N G_t$ reduces storage and cost by $\gtrsim$ one order of magnitude.

The compressed matrix, now of size $O(K'r \times M)$ with $K' \ll K$, is used as a drop-in replacement for greedy weight selection (OMP). All snapshot iterations and memory then depend only on $K'$, not on the original, much larger $Kr$.

Benchmarks on large 3D reaction-diffusion and gas-network PDEs demonstrate a $\sim$10–100$\times$ speedup in training and $\gtrsim 20\times$ memory reduction with negligible accuracy compromise (Liljegren-Sailer, 16 Dec 2025).

5. Error Analysis and Accuracy Guarantees

EQ admits both a posteriori and a priori error estimates. For the canonical setting, let $F(u) = \|A(u - \tilde{u})\|_2$ and let $\tilde{A}$ denote the compressed version; then
$$F(u) \leq \tilde{F}(u) + \kappa \|u - \tilde{u}\|,$$

$$F(u) \leq \tilde{F}(u) + \kappa\left(\sqrt{M_c/d_{\min}}\,(\epsilon + d^\top \tilde{u}) + \|\tilde{u}\|\right),$$

where $\kappa = \|A - \tilde{A}\|_F$ (Liljegren-Sailer, 16 Dec 2025). By choosing the truncation rank $K'$ in CPCA so that the discarded singular values are negligible, the cost function in compressed coordinates approximates the true loss up to $O(\kappa)$.

Similarly, for generic EQ,

$$\max_k \left|I_k^{\text{full}}(\mu) - I_k^{\text{sparse}}(\mu)\right| \leq \left(\|w\|_2 + |\Omega|\right) \left(\sum_{i=R+1}^{\operatorname{rank}(A)} \sigma_i^2\right)^{1/2} + \epsilon_1 S_f + 2|\Omega| L_f \Delta,$$

with error controlled by the SVD truncation, residual tolerance, and the training set density in parameter space (Manucci et al., 2020).

6. Domain-Specific Applications and Empirical Results

Empirical quadrature is deployed in diverse contexts:

  • Dual-norm estimation for error certificates in certified ROMs, vastly reducing offline/online costs compared to classical “approximation-then-integration” (ATI) surrogates, especially when test-space reduction makes $J_{\text{es}} \ll M$ (Taddei, 2018).
  • Convection-dominated flows: EQ-based hyperreduction enables mesh-independent online complexity, matching full-order accuracy with $\lesssim$5–10% of elements retained (Mirhoseini et al., 2023).
  • Nonlinear Schrödinger and diffusion PDEs: FOCUSS-based $\ell^p$ minimization matches or outperforms $\ell^1$- or nonnegative-least-squares-based cubature in both sparsity and attainable accuracy (Manucci et al., 2020).
  • Kernel cubature and worst-case integration error: EZQ achieves $\mathcal{O}\left(\sum_{j>N}\lambda_j\right)$ error rates under DPP node selection, matching minimax rates under CVS (Belhadji, 2023).
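The kernel-cubature bullet rests on the classical worst-case-optimal weight formula $w = K^{-1} z$, where $K$ is the kernel Gram matrix at the nodes and $z$ the kernel mean embedding. A minimal sketch with the Brownian-motion kernel $k(x,y)=\min(x,y)$ on $[0,1]$ (ad hoc equispaced nodes; EZQ's DPP node sampling is not reproduced here):

```python
import numpy as np

# Brownian-motion kernel k(x, y) = min(x, y) on [0, 1]; its mean embedding is
# z(x) = int_0^1 min(x, y) dy = x - x^2/2, and int int min(x,y) dx dy = 1/3.
nodes = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # ad hoc nodes (DPP-sampled in EZQ)
K = np.minimum.outer(nodes, nodes)
z = nodes - nodes**2 / 2
w = np.linalg.solve(K, z)                     # worst-case-optimal weights

# Worst-case squared error over the RKHS unit ball: ||mu||^2 - z^T K^{-1} z
wce2 = 1 / 3 - z @ w
print("worst-case error:", np.sqrt(max(wce2, 0.0)))

# Sanity check: integrate f(x) = x (which lies in this RKHS); exact value 1/2
approx = w @ nodes
print("quadrature of f(x)=x:", approx)        # → 0.495 (exact: 0.5)
```

For this kernel the optimal rule integrates the piecewise-linear interpolant of the data, hence the slight underestimate beyond the last node; the error is bounded by the worst-case error times the RKHS norm of the integrand.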

Empirical quadrature thus enables order-of-magnitude reductions in memory and CPU cost while preserving accuracy, with direct impact on large-scale nonlinear model reduction.

7. Connections, Variants, and Best Practices

EQ subsumes and generalizes empirical cubature, DEIM/ED, Kohn–Sham cubature, and kernel-based interpolation rules, provided the requisite bilinear or multilinear structure is present. Critical recommendations include (Liljegren-Sailer, 16 Dec 2025):

  • Set the structured-compression rank slightly above the desired quadrature sparsity.
  • Ensure the regularization weights are strictly positive to avoid degenerate solutions.
  • Exploit block-diagonal or locality structure of test spaces for computational gains.
  • Monitor singular value decay in compressed snapshot matrices to control approximation error.
  • Parallelize snapshot assembly and SVD for maximal scale.
  • Integrate EQ with test space reduction for greatest overall cost savings when the number of relevant test functionals is small.

A plausible implication is that as model sizes and nonlinearity complexities continue to grow, structured and data-adaptive variants of EQ will become increasingly necessary for scalable model reduction, certified error control, and data-driven kernel approximation.


Key references: (Liljegren-Sailer, 16 Dec 2025; Taddei, 2018; Mirhoseini et al., 2023; Manucci et al., 2020; Belhadji, 2023).
