Empirical Quadrature Methods
- Empirical Quadrature is a data-driven method that constructs sparse, nonnegative rules to efficiently approximate complex integrals in reduced-order modeling.
- It leverages snapshot data and optimization techniques such as ℓ1, ℓp minimization, and greedy OMP to tailor the quadrature rule with high accuracy and reduced computational cost.
- Applications span hyperreduction in nonlinear PDEs, uncertainty quantification, and kernel cubature, offering significant memory and speed improvements in large-scale simulations.
Empirical quadrature is a data-driven methodology for constructing sparse, nonnegative quadrature rules tailored to families of parametrized integrals or nonlinear terms arising in model reduction, uncertainty quantification, and kernel-based approximation. Unlike classical rules derived from polynomials or moment constraints, empirical quadrature (EQ) selects both nodes and weights by optimizing their ability to represent a given finite set of function evaluations, often with additional constraints such as nonnegativity, sparsity, or structural preservation. EQ has become a central hyper-reduction tool in projection-based reduced-order modeling (ROM), efficient dual-norm computation, and sparse kernel integration. Recent developments include advanced optimization strategies for weight recovery and algorithmic frameworks to address the computational scale of modern applications.
1. Empirical Quadrature: Definition, Role, and Scope
Empirical quadrature seeks to approximate integrals or large sums, typically over high-dimensional or locally supported nonlinearities, by a weighted sum involving only a small, adaptively chosen subset of points or cells. Denoting by $f(u;\mu) \in \mathbb{R}^N$ a discrete high-dimensional nonlinearity and by $V, W \in \mathbb{R}^{N \times n}$ trial and test space bases ($n \ll N$), a reduced-order model computes the projected nonlinearity
$$W^\top f(V u_n;\mu) = \sum_{i=1}^{N} W_i^\top f_i(V u_n;\mu),$$
which still incurs $O(N)$ cost. The empirical quadrature approximation replaces the full sum with a sparse rule:
$$W^\top f(V u_n;\mu) \approx \sum_{i \in S} \rho_i\, W_i^\top f_i(V u_n;\mu),$$
where $S \subset \{1,\dots,N\}$ with $|S| = m \ll N$, $\rho_i \ge 0$, and each $f_i$ is localized (e.g., a pointwise or cell-wise contribution) (Liljegren-Sailer, 16 Dec 2025, Taddei, 2018, Mirhoseini et al., 2023). This direct sparsification yields online complexity independent of $N$ while maintaining high-fidelity nonlinear approximation. Empirical quadrature is thus a primary complexity-reduction mechanism in hyperreduced ROMs.
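In code, the replacement of the full projected sum by a sparse weighted sum looks roughly as follows. This is a minimal NumPy sketch: the basis `W`, contributions `f`, index set `S`, and uniform weights `rho` are illustrative placeholders, not an optimized EQ rule — it shows only the cost structure of the surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 100_000, 8, 50            # full dimension N, reduced dimension n, rule size m

W = rng.standard_normal((N, n))     # test-space basis (columns span the reduced test space)
f = rng.standard_normal(N)          # pointwise/cell-wise contributions f_i of the nonlinearity

# Full-order projection: touches all N entries, O(N * n) per evaluation
full = W.T @ f

# Empirical quadrature surrogate: only m << N entries, O(m * n) per evaluation
S = rng.choice(N, size=m, replace=False)   # index set, selected offline in a real EQ method
rho = np.full(m, N / m)                    # placeholder weights; EQ optimizes these offline
sparse = W[S].T @ (rho * f[S])             # same shape as `full`, built from m contributions
```

The online cost depends only on $m$ and $n$; the offline stage is what determines the weights so that `sparse` tracks `full` over the parameter range.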
EQ’s empirical nature refers to the use of snapshot data from a training set—spanning representative parameter values or solution behaviors—to build the rule, as opposed to relying solely on analytic or a priori properties of the integrand (Manucci et al., 2020). Applications extend to dual norm calculation in residual estimates, hyperreduction in minimum-residual ROMs, and kernel-based cubature (Belhadji, 2023).
2. Optimization Formulations for Quadrature Rule Recovery
Construction of empirical quadrature rules is posed as a sparse nonnegative recovery problem with affine or nearly affine constraints given by snapshot data:
$$\min_{\rho \ge 0} \|\rho\|_0 \quad \text{s.t.} \quad \|A\rho - b\| \le \delta,$$
where $A$ represents evaluation of snapshot functions at candidate nodes (and possibly multiple parameter values or test functions), $b$ collects the corresponding “truth” integral values, and $\delta$ is an admissible residual (Manucci et al., 2020). Practical implementations convexify $\|\rho\|_0$ to $\ell_1$ minimization or employ nonconvex $\ell_p$ quasi-norms ($0 < p < 1$), with explicit sum constraints (e.g., $\sum_i \rho_i = |\Omega|$) added for normalization. The “focal underdetermined system solver” (FOCUSS) is one popular algorithm for the nonconvex case, yielding more compact rules than $\ell_1$ minimization but at greater offline computational complexity (Manucci et al., 2020).
For large-scale PDE reductions, the selection proceeds via greedy cardinality-constrained nonnegative least squares (NNLS), typified by Orthogonal Matching Pursuit (OMP), with gradient-based support selection and restricted NNLS solves at each step (Liljegren-Sailer, 16 Dec 2025, Mirhoseini et al., 2023). Volume conservation, mesh distortion, and other physical constraints may be incorporated into the feasible set.
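A minimal sketch of the greedy OMP-style selection described above, NumPy only. The constraint matrix `A`, the synthetic sparse ground truth, and the plain least-squares refit are illustrative assumptions; production implementations replace the refit with a restricted NNLS solve to enforce nonnegative weights.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, k = 100, 200, 3                      # M snapshot constraints, N candidates, k-sparse rule

A = rng.standard_normal((M, N))            # A[j, i]: snapshot function j at candidate node i
rho_true = np.zeros(N)
rho_true[[5, 80, 150]] = [2.0, 1.0, 3.0]   # synthetic sparse nonnegative ground truth
b = A @ rho_true                           # "truth" integral values

# OMP: grow the support by the column most correlated with the residual,
# then re-fit all active weights on the restricted support.
support, res = [], b.copy()
for _ in range(k):
    scores = np.abs(A.T @ res) / np.linalg.norm(A, axis=0)
    support.append(int(np.argmax(scores)))
    w, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    res = b - A[:, support] @ w            # real codes: restricted NNLS here for w >= 0

rho = np.zeros(N)
rho[support] = w
```

With a well-conditioned constraint matrix and a truly sparse generating rule, the greedy loop recovers the support exactly and the refit reproduces the nonnegative weights.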
Table: Quadrature Optimization Strategies
| Method | Cost Scaling | Advantages |
|---|---|---|
| $\ell_1$-LP | Linear in $N$ | Robust, convex, moderate sparsity |
| $\ell_p$-FOCUSS | Iterative (post-SVD) | Higher sparsity, tighter tolerance |
| OMP | Iterative | Greedy, scalable, interpretable |
3. Empirical Quadrature in Reduced-Order Modeling
EQ is a core hyperreduction tool in projection-based model order reduction of nonlinear PDEs. In the standard framework (Liljegren-Sailer, 16 Dec 2025), one seeks to replace the full-order projected sum
$$W^\top f(V u_n;\mu) = \sum_{i=1}^{N} W_i^\top f_i(V u_n;\mu)$$
with a sparse surrogate sum. The snapshot manifold is encoded as a large matrix $A \in \mathbb{R}^{M \times N}$, whose rows collect the candidate contributions over training snapshots and test functions. The EQ selection (offline) identifies a subset $S$ and corresponding weights $\rho_i \ge 0$ such that all essential bilinear forms are accurately integrated on the reduced space.
Empirical quadrature underpins efficient implementations of the Discrete Empirical Interpolation Method (DEIM), Empirical Cubature Methods (ECM), and is pivotal for minimum-residual ROMs in the presence of nonlinear terms (Taddei, 2018, Mirhoseini et al., 2023). For residual-based ROMs, constraint satisfaction at each training parameter is enforced via weight optimization, and the resulting sparse stencil yields efficient online assembly of the reduced residual and its derivatives.
EQ is also integrated with “empirical test spaces” (ES) for fast, certified dual-norm assessment of parameterized functionals: the test space is reduced by POD, and an EQ rule is constructed to accurately integrate functionals on the reduced test manifold (Taddei, 2018).
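The training system that the weight optimization acts on can be assembled from snapshots roughly as follows. This is a hedged sketch: the quadratic nonlinearity, the random test basis, and the random snapshot data are stand-ins, with one constraint row per (training parameter, test function) pair.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, K = 500, 4, 6                 # mesh entities, test-basis size, training snapshots

W = rng.standard_normal((N, n))     # reduced test basis
U = rng.standard_normal((N, K))     # solution snapshots u(mu_k), one column per parameter

def nonlin(u):
    return u**2                     # stand-in for a pointwise nonlinearity f(u)_i

# One constraint row per (snapshot, test function) pair:
# row entries are the candidate-node contributions, the RHS is their full sum ("truth").
rows = []
for k in range(K):
    fu = nonlin(U[:, k])
    for l in range(n):
        rows.append(W[:, l] * fu)
G = np.array(rows)                  # (K * n) x N training matrix
b = G.sum(axis=1)                   # truth integrals; the trivial rule rho = 1 matches exactly
```

EQ then searches for a sparse nonnegative `rho` with `G @ rho ≈ b`; the all-ones vector is always feasible, which guarantees the optimization problem is nonempty.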
4. Algorithmic and Computational Advances: Structured Compression
A principal challenge is the infeasibility of directly forming or factorizing the massive snapshot matrix $A$ for large models. Structured compression exploits the multilinear (tensor) nature of $A$ to produce factorizations $A = BC$—where $B$ encodes test function structure and $C$ stores snapshot data—and applies low-rank approximation to the small factors only (Liljegren-Sailer, 16 Dec 2025). Algorithm CPCA performs:
- Factor $A = BC$ ($B$ block/tridiagonal structured).
- Thin QR: $B = QR$ (block-diagonal $R$, cheap for block-diagonal $B$).
- Truncated SVD: $RC \approx U_r \Sigma_r V_r^\top$.
- Form the compressed matrix $\tilde{A} = \Sigma_r V_r^\top$.
- The compressed manifold reduces storage and cost by an order of magnitude.
The compressed matrix, now of size $r \times N$ ($r \ll M$), is used as a drop-in for greedy weight selection (OMP). All snapshot iterations and memory costs depend only on $r$, not on the original, much larger dimension $M$.
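The compression pipeline can be sketched as follows, with dense random factors standing in for the structured test-function factor `B` and snapshot factor `C` of the paper. The key property, checked at the end, is that norms of candidate-weight images are preserved up to the first discarded singular value.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, N, r = 800, 40, 3000, 15       # constraints M, factor rank K, candidate nodes N, truncation r

B = rng.standard_normal((M, K))      # test-function factor (block-structured in real applications)
C = rng.standard_normal((K, N))      # snapshot-data factor; the product B @ C is never formed

Q, R = np.linalg.qr(B)                                 # thin QR of the tall factor only
U, s, Vt = np.linalg.svd(R @ C, full_matrices=False)   # SVD of the small K x N matrix
A_tilde = s[:r, None] * Vt[:r]                         # compressed r x N drop-in matrix
# Q @ U[:, :r] maps compressed coordinates back to the full M-dimensional space if needed

# SVD truncation bound: | ||A @ rho|| - ||A_tilde @ rho|| | <= sigma_{r+1} * ||rho||
rho = rng.random(N)
err = abs(np.linalg.norm(B @ (C @ rho)) - np.linalg.norm(A_tilde @ rho))
```

Only matrices of sizes $M \times K$, $K \times N$, and $r \times N$ are ever touched, which is the source of the reported memory savings.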
Benchmarks on large 3D reaction-diffusion and gas-network PDEs demonstrate a 10–100× speedup in training and a corresponding memory reduction with negligible accuracy compromise (Liljegren-Sailer, 16 Dec 2025).
5. Error Analysis and Accuracy Guarantees
EQ admits both a posteriori and a priori error estimates. For the canonical setting, let $J(\rho) = \|A\rho - b\|_2$ and let $\tilde{J}(\rho) = \|\tilde{A}\rho - \tilde{b}\|_2$ denote its compressed counterpart; then
$$|J(\rho) - \tilde{J}(\rho)| \le \varepsilon_r \|\rho\|_2,$$
where $\varepsilon_r = \sigma_{r+1}$ is the first discarded singular value (Liljegren-Sailer, 16 Dec 2025). By choosing the truncation rank in CPCA to make the discarded singular values negligible, the cost function in compressed coordinates approximates the true loss up to $O(\varepsilon_r)$.
Similarly, for generic EQ, the rule satisfies $\|A\rho - b\| \le \delta$ on the training set by construction, with the error for unseen parameters controlled by the SVD truncation, the residual tolerance, and the density of the training set in parameter space (Manucci et al., 2020).
6. Domain-Specific Applications and Empirical Results
Empirical quadrature is deployed in diverse contexts:
- Dual-norm estimation for error certificates in certified ROMs, vastly reducing offline/online costs compared to classical “approximation-then-integration” (ATI) surrogates, especially when test space reduction makes the number of retained test functionals small (Taddei, 2018).
- Convection-dominated flows—EQ-based hyperreduction enables mesh-independent online complexity, matching full-order accuracy with 5–10% of elements retained (Mirhoseini et al., 2023).
- Nonlinear Schrödinger and diffusion PDEs—FOCUSS-based $\ell_p$-minimization matches or outperforms $\ell_1$- or nonnegative-least-squares-based cubature in both sparsity and attainable accuracy (Manucci et al., 2020).
- Kernel cubature and worst-case integration error—EQ-type rules achieve near-optimal worst-case error rates under determinantal point process (DPP) node selection, matching minimax rates under continuous volume sampling (CVS) (Belhadji, 2023).
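For kernel cubature, the worst-case integration error over the unit ball of an RKHS has a closed form in the kernel and its mean embedding. A self-contained sketch with the Brownian-motion kernel $k(x,y)=\min(x,y)$ on $[0,1]$, chosen because its embedding is available analytically; the node placement and weights here are a plain midpoint rule, not a DPP/CVS design:

```python
import numpy as np

def worst_case_error(nodes, weights):
    """Worst-case integration error over the unit ball of the RKHS of
    k(x, y) = min(x, y) on [0, 1], against the uniform measure."""
    mu = nodes - nodes**2 / 2                 # mean embedding: int_0^1 min(x, y) dy
    K = np.minimum.outer(nodes, nodes)        # kernel Gram matrix
    e2 = 1.0 / 3.0 - 2.0 * weights @ mu + weights @ K @ weights
    return float(np.sqrt(max(e2, 0.0)))       # 1/3 = double integral of the kernel

def midpoint_rule(n):
    return (np.arange(n) + 0.5) / n, np.full(n, 1.0 / n)

err_5 = worst_case_error(*midpoint_rule(5))
err_20 = worst_case_error(*midpoint_rule(20))   # more nodes -> smaller worst-case error
```

The same three-term formula (double integral, cross term with the embedding, quadratic Gram term) applies to any kernel whose embeddings can be evaluated, which is how such worst-case guarantees are checked in practice.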
Empirical quadrature thus enables order-of-magnitude reductions in memory and CPU cost while preserving accuracy, with direct impact on large-scale nonlinear model reduction.
7. Connections, Variants, and Best Practices
EQ subsumes and generalizes empirical cubature, DEIM/EIM, Kohn–Sham cubature, and kernel-based interpolation rules, provided the requisite bilinear or multilinear structure is present. Critical recommendations include (Liljegren-Sailer, 16 Dec 2025):
- Set the structured-compression rank slightly above the desired quadrature sparsity.
- Ensure the regularization weights are strictly positive to avoid degenerate solutions.
- Exploit block-diagonal or locality structure of test spaces for computational gains.
- Monitor singular value decay in compressed snapshot matrices to control approximation error.
- Parallelize snapshot assembly and SVD for maximal scale.
- Integrate EQ with test space reduction for greatest overall cost savings when the number of relevant test functionals is small.
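The rank-monitoring recommendation above can be automated with a simple relative-decay criterion. A sketch; the tolerance value and the heuristic of keeping the rank slightly above the target sparsity are illustrative choices:

```python
import numpy as np

def choose_rank(singular_values, tol=1e-6):
    """Smallest rank r such that sigma_{r+1} <= tol * sigma_1 (relative-decay criterion)."""
    s = np.asarray(singular_values, dtype=float)
    below = np.nonzero(s / s[0] <= tol)[0]
    return int(below[0]) if below.size else len(s)

s = np.array([1.0, 1e-1, 1e-3, 1e-7, 1e-9])    # illustrative singular-value decay
r = choose_rank(s, tol=1e-6)                    # keeps the first 3 modes
m_target = 2                                    # desired quadrature sparsity (illustrative)
r = max(r, m_target + 1)                        # keep rank slightly above target sparsity
```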
A plausible implication is that as model sizes and nonlinearity complexities continue to grow, structured and data-adaptive variants of EQ will become increasingly necessary for scalable model reduction, certified error control, and data-driven kernel approximation.
Key references: (Liljegren-Sailer, 16 Dec 2025, Taddei, 2018, Mirhoseini et al., 2023, Manucci et al., 2020, Belhadji, 2023).