RB-DeepONet: Hybrid Operator Learning
- RB-DeepONet is a hybrid operator-learning framework that fuses a branch–trunk neural network with reduced basis methods to approximate parametric PDE solutions.
- It utilizes a fixed offline-constructed RB trunk and a branch network that outputs low-dimensional coefficients, ensuring robust error control and real-time evaluation.
- The method achieves competitive accuracy with dramatically fewer degrees of freedom, offering scalable performance and enhanced interpretability over traditional approaches.
RB-DeepONet is a hybrid operator-learning framework designed to efficiently and accurately approximate parametric Partial Differential Equation (PDE) solution operators. It combines the branch–trunk neural network architecture of DeepONet with structural elements and stability properties from Reduced Basis (RB) methods. The trunk is fixed to a deterministically constructed RB basis, providing interpretability, offline–online separation, and robust error control, while the branch neural network outputs only the RB coefficients. RB-DeepONet addresses scenarios in which parameters, boundary data, and source data vary independently, using compressed modal encodings for these components, and it is trained label-free via projected variational residuals. It achieves accuracy competitive with intrusive RB-Galerkin and POD-DeepONet surrogates while requiring dramatically fewer degrees of freedom for both training and online evaluation (Wang et al., 23 Nov 2025).
1. Mathematical Structure and Operator Representation
RB-DeepONet approximates solution maps of the form $\mu \mapsto u(\mu)$, where $u(\mu)$ solves a parametric PDE. The approximation leverages an RB expansion $u_N(x;\mu) = \sum_{n=1}^{N} \beta_n(\mu)\,\psi_n(x)$, with a fixed RB trunk $\{\psi_n\}_{n=1}^{N}$ and an $N$-dimensional output $\beta(\mu)$ from the branch neural network. In matrix form, $u_N(\mu) = \Psi\,\beta(\mu)$, where $\Psi$ collects the trunk modes column-wise. The RB trunk is constructed offline, and $N \ll \mathcal{N}$, the full-order FE space dimension. The branch network thus learns a low-dimensional mapping from parameter (and, where relevant, boundary/source encoding) space to RB coefficients.
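A minimal sketch of this representation, assuming a precomputed trunk matrix `Psi` and some trained branch network `branch_net` (names illustrative), in NumPy:

```python
import numpy as np

# Psi: (N_h, N) matrix whose columns are the offline-built RB trunk modes,
# expressed in the full-order finite element basis (N_h = FE dimension, N << N_h).
# branch_net: any callable mu -> beta(mu) in R^N (e.g. a small MLP); assumed here.

def rb_deeponet_eval(Psi: np.ndarray, branch_net, mu: np.ndarray) -> np.ndarray:
    """Evaluate the RB-DeepONet surrogate u_N(mu) = Psi @ beta(mu)."""
    beta = branch_net(mu)          # low-dimensional RB coefficients, shape (N,)
    return Psi @ beta              # full-field reconstruction, shape (N_h,)
```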
2. Offline Reduced Basis Construction
The RB trunk is generated via a greedy algorithm over a training parameter set $\Xi_{\mathrm{train}}$. At each iteration, the parameter with the largest a posteriori error estimate is selected:
- Compute the reduced Galerkin solution $u_N(\mu)$ for each $\mu \in \Xi_{\mathrm{train}}$ in the current RB space.
- Form the residual $r(\mu) = f(\mu) - A(\mu)\,u_N(\mu)$.
- Estimate the error via $\Delta_N(\mu) = \|r(\mu)\|_{V'} / \alpha_{\mathrm{LB}}(\mu)$, where $\alpha_{\mathrm{LB}}(\mu)$ is (a computable lower bound on) the coercivity constant.
- Select $\mu^{\star} = \arg\max_{\mu \in \Xi_{\mathrm{train}}} \Delta_N(\mu)$, solve for the high-fidelity snapshot $u_h(\mu^{\star})$, orthonormalize it against the current basis, and enrich the RB space.
This robust process ensures the trunk is physically meaningful and tailored to the problem manifold (Wang et al., 23 Nov 2025).
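A minimal NumPy sketch of this greedy loop, under the simplifying assumptions that `assemble_A(mu)` and `assemble_f(mu)` return the full-order system and that the error indicator is a plain Euclidean residual norm (a certified implementation would use the residual dual norm and a coercivity lower bound, as described above):

```python
import numpy as np

def greedy_rb(assemble_A, assemble_f, train_params, N_max, tol=1e-6):
    """Greedy construction of an RB trunk Psi (columns = orthonormal modes).

    assemble_A(mu) -> (N_h, N_h) system matrix; assemble_f(mu) -> (N_h,) load vector.
    train_params: list of parameter samples; N_max: maximum basis size.
    NOTE: the error indicator below is a plain Euclidean residual norm,
    not the certified dual-norm/coercivity estimator used in the paper.
    """
    mu0 = train_params[0]
    u0 = np.linalg.solve(assemble_A(mu0), assemble_f(mu0))
    Psi = (u0 / np.linalg.norm(u0))[:, None]                 # first mode

    for _ in range(1, N_max):
        errors = []
        for mu in train_params:
            A, f = assemble_A(mu), assemble_f(mu)
            A_N, f_N = Psi.T @ A @ Psi, Psi.T @ f             # reduced Galerkin system
            beta = np.linalg.solve(A_N, f_N)
            errors.append(np.linalg.norm(f - A @ (Psi @ beta)))  # residual indicator
        if max(errors) < tol:
            break
        mu_star = train_params[int(np.argmax(errors))]        # worst-approximated parameter
        u_star = np.linalg.solve(assemble_A(mu_star), assemble_f(mu_star))
        u_star -= Psi @ (Psi.T @ u_star)                      # Gram-Schmidt orthogonalization
        Psi = np.hstack([Psi, (u_star / np.linalg.norm(u_star))[:, None]])
    return Psi
```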
3. Label-Free Training via Projected Variational Residual
Training does not require full-field output supervision. With the trunk fixed, the branch network outputs RB coefficients $\beta_\theta(\mu)$, which are optimized by minimizing the projected Galerkin residual
$$\mathcal{L}(\theta) = \frac{1}{|\Xi_{\mathrm{train}}|}\sum_{\mu \in \Xi_{\mathrm{train}}} \big\| A_N(\mu)\,\beta_\theta(\mu) - f_N(\mu) \big\|^2,$$
where $A_N(\mu) = \Psi^{\top} A(\mu)\,\Psi$ and $f_N(\mu) = \Psi^{\top} f(\mu)$ are the reduced operator and right-hand side. This structure enforces that the RB-DeepONet output approaches the RB-Galerkin solution for each $\mu$, guaranteeing that the physics and numerical structure are preserved (Wang et al., 23 Nov 2025).
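A hedged sketch of the corresponding label-free loss, assuming precomputed reduced blocks and a differentiable branch network; written in PyTorch for illustration, not as the paper's exact implementation:

```python
import torch

def projected_residual_loss(branch_net, mus, A_N_batch, f_N_batch):
    """Label-free loss: mean squared reduced Galerkin residual over a parameter batch.

    mus:       (B, p) parameter samples
    A_N_batch: (B, N, N) reduced operators  Psi^T A(mu) Psi  (precomputed offline)
    f_N_batch: (B, N)    reduced right-hand sides  Psi^T f(mu)
    """
    beta = branch_net(mus)                                        # (B, N) RB coefficients
    residual = torch.bmm(A_N_batch, beta.unsqueeze(-1)).squeeze(-1) - f_N_batch
    return (residual ** 2).sum(dim=-1).mean()

# Illustrative optimization step (hypothetical network and shapes):
# branch_net = torch.nn.Sequential(torch.nn.Linear(p, 64), torch.nn.Tanh(), torch.nn.Linear(64, N))
# opt = torch.optim.Adam(branch_net.parameters(), lr=1e-3)
# loss = projected_residual_loss(branch_net, mus, A_N_batch, f_N_batch)
# loss.backward(); opt.step(); opt.zero_grad()
```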
4. Encoding of Boundary and Source Data
When boundary data or source data vary independently of physical parameters, RB-DeepONet compresses these exogenous data via modal encodings:
- Boundary modes: Constructed by greedy selection in the Dirichlet-trace norm, yielding a small set of boundary modes and the corresponding low-dimensional coordinates of the prescribed boundary data.
- Source modes: Source functionals are mapped to their Riesz representers and compressed into a compact basis with associated coordinates. These modes allow the right-hand side to be assembled entirely from low-dimensional representations, making online costs independent of the full dimensions of the boundary and source data (Wang et al., 23 Nov 2025).
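A minimal sketch of such a modal encoding (plain orthogonal projection onto a precomputed mode matrix; the paper's construction uses the Dirichlet-trace norm and Riesz representers, so this is only an illustrative simplification):

```python
import numpy as np

def encode(data: np.ndarray, modes: np.ndarray) -> np.ndarray:
    """Project exogenous data (boundary trace or source vector) onto precomputed modes.

    modes: (M, K) matrix with K orthonormal columns; data: (M,) sample.
    Returns the K-dimensional coordinate vector fed to the branch network.
    """
    return modes.T @ data            # low-dimensional coordinates

def decode(coords: np.ndarray, modes: np.ndarray) -> np.ndarray:
    """Reconstruct the compressed data from its modal coordinates."""
    return modes @ coords
```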
5. Offline–Online Computational Workflow
RB-DeepONet enforces a strict offline–online split:
- Offline: RB trunk construction via the greedy algorithm; affine or EIM decomposition of the parametric operators; precomputation of the parameter-independent reduced blocks (projected operator and right-hand-side components).
- Online: Assembly of the reduced matrices $A_N(\mu)$ and $f_N(\mu)$ from the precomputed blocks; evaluation of the branch network $\beta_\theta(\mu)$; computation of the projected residual. All of these steps scale with the RB dimension $N$ only. This decomposition removes the dependence of online complexity on the full-order FE mesh size $\mathcal{N}$, yielding scalable real-time evaluation (Wang et al., 23 Nov 2025).
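A minimal sketch of the online stage, assuming an affine decomposition $A(\mu) = \sum_q \theta^a_q(\mu) A_q$ and $f(\mu) = \sum_q \theta^f_q(\mu) f_q$ with reduced blocks precomputed offline (all names illustrative):

```python
import numpy as np

def online_solve(theta_a, theta_f, A_N_blocks, f_N_blocks, branch_net, mu):
    """Online stage: assemble the reduced system from precomputed blocks and query the branch.

    theta_a / theta_f: lists of scalar coefficient functions from the affine decomposition
    A_N_blocks: list of (N, N) precomputed blocks  Psi^T A_q Psi
    f_N_blocks: list of (N,)   precomputed blocks  Psi^T f_q
    Every step below scales with the RB dimension N only, never with the FE dimension.
    """
    A_N = sum(t(mu) * A_q for t, A_q in zip(theta_a, A_N_blocks))
    f_N = sum(t(mu) * f_q for t, f_q in zip(theta_f, f_N_blocks))
    beta = branch_net(mu)                          # network-predicted RB coefficients
    residual = np.linalg.norm(A_N @ beta - f_N)    # projected residual (cheap diagnostic)
    return beta, residual
```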
6. Mathematical Guarantees
Under standard regularity assumptions (coercivity, Lipschitz parameter dependence, bounded hypothesis class capacity), RB-DeepONet satisfies the following:
- Network approximation: for any tolerance there exists a branch network whose output uniformly approximates the RB-Galerkin coefficient map (universal approximation in coefficient space).
- Empirical–population uniform convergence: For finite samples and network size, empirical and population losses converge with high probability.
- Consistency: The learned network coefficients converge to the true RB-Galerkin predictions as network size and sample count increase. Error in the overall surrogate separates RB truncation error from statistical error (Wang et al., 23 Nov 2025).
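A hedged sketch of this error separation, written as the standard triangle-inequality split (notation consistent with the expansion above; this is an illustration, not the paper's exact statement):

```latex
% Surrogate error splits into RB truncation error and learning (statistical) error:
\|u(\mu) - \Psi\,\beta_\theta(\mu)\|_{V}
  \;\le\; \underbrace{\|u(\mu) - u_N(\mu)\|_{V}}_{\text{RB truncation error}}
  \;+\; \underbrace{\|u_N(\mu) - \Psi\,\beta_\theta(\mu)\|_{V}}_{\text{network / statistical error}}
```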
7. Numerical Performance and Practical Comparison
RB-DeepONet demonstrates competitive accuracy and dramatic efficiency across canonical parametric elliptic PDEs:
- For a 2D linear conduction case (disk-in-domain, two parameters), RB-DeepONet matches the accuracy of POD-DeepONet with only 3 RB modes and roughly 2·10⁵ network parameters, while remaining within a small factor of the intrusive RB-Galerkin errors.
- For problems with independently encoded exogenous data (source/boundary), RB-DeepONet compresses the source and boundary data to a small number of modal coefficients and uses 209 RB modes, requiring only 337 offline high-fidelity solves versus 10,000 for full POD trunk construction.
- Online evaluation cost scales only with the RB dimension and the modal encoding sizes, representing orders-of-magnitude speedup over full-FE surrogates.
A summary table of computational costs and parameter counts from three test cases:
| Example | RB HF solves | POD HF solves | FEONet HF solves | Trunk size | Trainable Params (RB/POD) | FEONet Params |
|---|---|---|---|---|---|---|
| Case 1 | 3 | 2100 | 0 | 3 | ~2·10⁵ | ~7.3·10⁵ |
| Case 2 | 337 | 10,000 | 0 | 209 | ~2.9·10⁵ | ~1.25·10⁶ |
| Case 3 | 5 | 4000 | 0 | 5 | ~2·10⁵ | – |
In all evaluated scenarios, RB-DeepONet equaled or closely matched the accuracy of POD-DeepONet and FEONet while exhibiting significant efficiency and interpretability advantages (Wang et al., 23 Nov 2025).
8. Connections to RandONet and Further Directions
RB-DeepONet can be viewed as an interpretable variant of architectures such as RandONet (Fabiani et al., 8 Jun 2024), in which deterministic reduced-basis functions replace random features in the branch or trunk. The RB-DeepONet formulation retains universality by augmenting the RB trunk with sufficient flexibility in the branch network and could be extended to hybrid settings where RB components are combined with random or data-adaptive embeddings for multi-scale phenomena. Prospective research includes the use of kernel-POD or spectral embeddings, hybrid RB–randomized architectures, and adaptive identification of low-dimensional structure via regularization or multi-level decompositions (Fabiani et al., 8 Jun 2024).
RB-DeepONet fuses classical model-order reduction with modern operator learning, providing an efficient, stable, and interpretable surrogate for high-dimensional PDE systems with rigorous offline–online separation, certified error control, and superior computational efficiency relative to deep or fully data-driven alternatives (Wang et al., 23 Nov 2025).