
RB-DeepONet: Hybrid Operator Learning

Updated 30 November 2025
  • RB-DeepONet is a hybrid operator-learning framework that fuses a branch–trunk neural network with reduced basis methods to approximate parametric PDE solutions.
  • It utilizes a fixed offline-constructed RB trunk and a branch network that outputs low-dimensional coefficients, ensuring robust error control and real-time evaluation.
  • The method achieves competitive accuracy with dramatically fewer degrees of freedom, offering scalable performance and enhanced interpretability over traditional approaches.

RB-DeepONet is a hybrid operator-learning framework designed to efficiently and accurately approximate parametric Partial Differential Equation (PDE) solution operators. It combines the branch–trunk neural network architecture of DeepONet with structural elements and stability properties from Reduced Basis (RB) methods. The trunk is fixed to a deterministically constructed RB basis, providing interpretability, offline–online separation, and robust error control, while the branch neural network outputs only the RB coefficients. RB-DeepONet addresses scenarios in which parameters, boundary data, and source data vary independently, using compressed modal encodings for these components, and it is trained label-free via projected variational residuals. It achieves accuracy competitive with intrusive RB-Galerkin and POD-DeepONet surrogates while requiring dramatically fewer degrees of freedom for both training and online evaluation (Wang et al., 23 Nov 2025).

1. Mathematical Structure and Operator Representation

RB-DeepONet approximates solution maps of the form $\mu \mapsto u(\cdot;\mu)$, where $u(\cdot;\mu)$ solves a parametric PDE. The approximation leverages an RB expansion
$$u_r(x;\mu) = \sum_{i=1}^r c_i(\mu)\,\varphi_i(x)$$
with a fixed RB trunk $\{\varphi_i\}_{i=1}^r$ and an $r$-dimensional output $c(\mu)$ from the branch neural network. In matrix form, $u_r(x;\mu) = \Psi(x)\,c(\mu)$. The RB trunk is constructed offline, and $r \ll N_0$, the dimension of the full-order FE space. The branch network thus learns a low-dimensional mapping from parameter (and, where relevant, boundary/source encoding) space to RB coefficients.
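A minimal sketch of this evaluation, assuming a toy branch network and a precomputed matrix of trunk values at query points (all names, shapes, and weights below are illustrative placeholders, not from the paper):

```python
import numpy as np

# Hypothetical sizes: r RB modes, n_x query points, p-dimensional parameter.
r, n_x, p = 3, 500, 2

# Psi[i, j] = varphi_j(x_i): fixed RB trunk evaluated at the query points (built offline).
Psi = np.random.randn(n_x, r)            # stand-in for the precomputed RB basis

def branch(mu, W1, b1, W2, b2):
    """Toy branch network mu -> c(mu) in R^r (one hidden tanh layer)."""
    h = np.tanh(W1 @ mu + b1)
    return W2 @ h + b2

# Toy weights; in practice these are trained via the projected residual loss (Section 3).
W1, b1 = np.random.randn(32, p), np.zeros(32)
W2, b2 = np.random.randn(r, 32), np.zeros(r)

mu = np.array([0.7, 1.3])                # example parameter
c = branch(mu, W1, b1, W2, b2)           # RB coefficients c(mu) in R^r
u_r = Psi @ c                            # u_r(x; mu) = Psi(x) c(mu) at all query points
print(u_r.shape)                         # (n_x,)
```

Because the trunk matrix is fixed offline, only the $r$-dimensional branch output changes with $\mu$, so evaluation at the query points reduces to a single $n_x \times r$ matrix–vector product.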

2. Offline Reduced Basis Construction

The RB trunk is generated via a greedy algorithm over a training parameter set $S = \{\mu^k\}_{k=1}^{N_k}$. At each iteration, the parameter with the largest a posteriori predicted error is selected:

  • Compute the reduced Galerkin solution $w_{rb}(\mu)$ for each $\mu \in S$ in the current RB space.
  • Form the residual $r(v;\mu) = \ell(v;\mu) - a(w_{rb}(\mu), v;\mu)$.
  • Estimate the error via $\eta(\mu) = \| r(\cdot;\mu) \|_{V_0'} / \alpha_{\min}$, where $\alpha_{\min}$ is the coercivity constant.
  • Select $\mu^{n+1}$ maximizing $\eta$, solve for the high-fidelity $u(\mu^{n+1})$, orthonormalize, and enrich $V_{rb}$.

This greedy procedure ensures the trunk is physically meaningful and tailored to the solution manifold of the parametric problem (Wang et al., 23 Nov 2025).
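A schematic sketch of this greedy loop, assuming hypothetical helpers solve_full(mu) (high-fidelity solve returning a nodal vector) and error_estimator(mu, basis) (the a posteriori estimate $\eta(\mu)$, computed internally from a reduced solve and residual norm); Euclidean orthonormalization is used here for brevity in place of the energy inner product:

```python
import numpy as np

def greedy_rb_trunk(train_params, solve_full, error_estimator, tol=1e-4, r_max=50):
    """Greedy construction of the RB trunk over a training parameter set S."""
    u0 = solve_full(train_params[0])
    basis = [u0 / np.linalg.norm(u0)]                  # first orthonormal mode
    while len(basis) < r_max:
        # A posteriori error estimate eta(mu) for every candidate parameter.
        etas = [error_estimator(mu, basis) for mu in train_params]
        k = int(np.argmax(etas))
        if etas[k] < tol:
            break                                      # manifold resolved to tolerance
        u_new = solve_full(train_params[k])            # high-fidelity snapshot at worst mu
        for phi in basis:                              # Gram-Schmidt orthonormalization
            u_new = u_new - (u_new @ phi) * phi
        basis.append(u_new / np.linalg.norm(u_new))
    return np.stack(basis, axis=1)                     # columns = trunk modes (Psi)
```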

3. Label-Free Training via Projected Variational Residual

Training does not require full-field output supervision. With the trunk $\Psi(x)$ fixed, the branch network outputs RB coefficients $c_\theta(\mu)$, which are optimized by minimizing the projected Galerkin residual:
$$R(\theta;\mu) = F_{rb}(\mu) - A_{rb}(\mu)\, c_\theta(\mu), \qquad L(\theta) = \frac{1}{N_s} \sum_{j=1}^{N_s} \left\| A_{rb}(\mu^j)^{-1/2} R(\theta;\mu^j) \right\|_2^2,$$
where $A_{rb}(\mu) = \Psi^T A(\mu)\, \Psi$ and $F_{rb}(\mu) = \Psi^T F(\mu)$. This structure enforces that the RB-DeepONet output approaches the RB-Galerkin solution for each $\mu$, ensuring that the physics and numerical structure are preserved (Wang et al., 23 Nov 2025).
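A minimal PyTorch-style sketch of this loss, assuming the reduced operators are supplied as callables A_rb(mu) and F_rb(mu) (illustrative names, not the authors' implementation); the $A_{rb}^{-1/2}$ weighting is applied through a Cholesky solve, using $\| A^{-1/2} R \|_2^2 = R^\top A^{-1} R$ for symmetric positive definite $A_{rb}$:

```python
import torch

def projected_residual_loss(branch_net, mus, A_rb, F_rb):
    """Label-free loss: mean_j || A_rb(mu_j)^{-1/2} (F_rb(mu_j) - A_rb(mu_j) c_theta(mu_j)) ||_2^2."""
    loss = 0.0
    for mu in mus:
        A = A_rb(mu)                        # (r, r) reduced stiffness matrix, assumed SPD
        F = F_rb(mu)                        # (r,)  reduced load vector
        c = branch_net(mu)                  # (r,)  predicted RB coefficients c_theta(mu)
        R = F - A @ c                       # projected Galerkin residual
        L = torch.linalg.cholesky(A)        # A = L L^T
        z = torch.cholesky_solve(R.unsqueeze(-1), L).squeeze(-1)   # z = A^{-1} R
        loss = loss + R @ z                 # R^T A^{-1} R = || A^{-1/2} R ||_2^2
    return loss / len(mus)
```

The weights of branch_net can then be updated with any standard optimizer; no high-fidelity snapshots of $u$ appear in the loss, which is what makes the training label-free.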

4. Encoding of Boundary and Source Data

When boundary data $g_D$ or source data $f$ vary independently of the physical parameters, RB-DeepONet compresses these exogenous data via modal encodings:

  • Boundary modes: constructed by greedy selection in the Dirichlet-trace norm, yielding modes $\{\eta_n\}$ and coordinates $b_n = \langle g_D, \eta_n \rangle_{D,*}$.
  • Source modes: source functionals are mapped to their Riesz representers, giving compact bases $\{W_f^{(m)}\}$ and coordinates $a_m = F(\mu)[W_f^{(m)}]$. These modes allow the right-hand side to be assembled entirely from low-dimensional representations, making online costs independent of the full dimension of $g_D$ and $f$ (Wang et al., 23 Nov 2025).
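As a concrete illustration of these encodings, the sketch below compresses discretized boundary and source data into their modal coordinates; eta_modes, M_trace, and W_f_modes are hypothetical discrete representations of the greedy boundary basis, the trace inner product, and the Riesz-representer basis:

```python
import numpy as np

def encode_boundary(g_D, eta_modes, M_trace):
    """Coordinates b_n = <g_D, eta_n>_{D,*} of the Dirichlet data in the boundary modes."""
    # eta_modes: (n_bdry_dofs, r_g) columns are greedy-selected boundary modes
    # M_trace:   (n_bdry_dofs, n_bdry_dofs) Gram matrix of the Dirichlet-trace inner product
    return eta_modes.T @ (M_trace @ g_D)            # shape (r_g,)

def encode_source(F_vec, W_f_modes):
    """Coordinates a_m = F(mu)[W_f^(m)] of the load functional against the Riesz modes."""
    # W_f_modes: (n_dofs, r_f) nodal values of the Riesz-representer basis
    # F_vec:     (n_dofs,) assembled load vector acting on nodal test functions
    return W_f_modes.T @ F_vec                      # shape (r_f,)
```

The branch network then receives the low-dimensional coordinates (together with $\mu$) as input, so the online stage never touches the full discretizations of $g_D$ or $f$.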

5. Offline–Online Computational Workflow

RB-DeepONet enforces a strict offline–online split:

  • Offline: RB trunk construction via the greedy algorithm; affine or EIM decomposition of the operators $A(\mu), F(\mu)$; computation of the reduced blocks $A^N_p = \Psi^T A_p \Psi$, $F^N_q = \Psi^T F_q$.
  • Online: assembly of the reduced matrices $A_{rb}(\mu)$, $F_{rb}(\mu)$; evaluation of the branch network $c_\theta(\mu)$; computation of the projected residual, all scaling with the RB dimension $r$ only. This decomposition removes the dependence of online complexity on the full-order FE mesh size $N_h$, yielding scalable real-time evaluation (Wang et al., 23 Nov 2025).
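A sketch of the online stage under an affine decomposition $A(\mu) = \sum_p \theta_p^A(\mu) A_p$ and $F(\mu) = \sum_q \theta_q^F(\mu) F_q$, with the reduced blocks precomputed offline; all function and variable names are illustrative:

```python
import numpy as np

def online_evaluate(mu, theta_A, theta_F, A_blocks, F_blocks, branch, Psi=None):
    """Online stage: assemble r x r reduced operators and evaluate the surrogate."""
    # A_blocks[p] = Psi^T A_p Psi (r x r), F_blocks[q] = Psi^T F_q (r,), precomputed offline.
    A_rb = sum(t(mu) * Ap for t, Ap in zip(theta_A, A_blocks))
    F_rb = sum(t(mu) * Fq for t, Fq in zip(theta_F, F_blocks))
    c = branch(mu)                               # r RB coefficients from the trained branch net
    residual = F_rb - A_rb @ c                   # projected residual, O(r^2) to form
    u_r = Psi @ c if Psi is not None else None   # optional full-field reconstruction
    return c, residual, u_r
```

Every operation here scales with $r$ (or $r^2$), independently of the full-order mesh size.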

6. Mathematical Guarantees

Under standard regularity assumptions (coercivity, Lipschitz parameter dependence, bounded hypothesis class capacity), RB-DeepONet satisfies the following:

  • Network approximation: $\lim_{n \to \infty} \inf_{c \in \mathcal{N}_n} \| c - c_N \|_{L^2(\mathcal{D})} = 0$ (universal approximation in coefficient space).
  • Empirical–population uniform convergence: For finite samples and network size, empirical and population losses converge with high probability.
  • Consistency: The learned network coefficients converge to the true RB-Galerkin predictions as network size and sample count increase. Error in the overall surrogate separates RB truncation error from statistical error (Wang et al., 23 Nov 2025).
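The last point can be made explicit with the standard triangle-inequality split (a sketch of the reasoning, not a verbatim bound from the paper):

$$\| u(\mu) - \Psi\, c_\theta(\mu) \|_V \;\le\; \underbrace{\| u(\mu) - u_{rb}(\mu) \|_V}_{\text{RB truncation error}} \;+\; \underbrace{\| u_{rb}(\mu) - \Psi\, c_\theta(\mu) \|_V}_{\text{learning / statistical error}},$$

where the first term is controlled by the greedy RB construction and the second vanishes as network capacity and sample count grow.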

7. Numerical Performance and Practical Comparison

RB-DeepONet demonstrates competitive accuracy and dramatic efficiency across canonical parametric elliptic PDEs:

  • For a 2D linear conduction case (disk-in-domain, two parameters), RB-DeepONet achieves relative $L^2$ errors $\approx 5.0 \times 10^{-3}$ with only $r=3$ RB modes and $\sim 2 \cdot 10^5$ network parameters, matching POD-DeepONet with an $r=3$ trunk and lying within a small factor of intrusive RB-Galerkin errors.
  • For problems with independently encoded exogenous data (source/boundary), RB-DeepONet reduces the encoding to $r_f = 128$ source and $r_g = 16$ boundary coefficients with $r = 209$ RB modes, requiring only 337 offline high-fidelity solves versus $10^4$ for full POD trunk construction.
  • Online evaluation cost is $O(r)$ or $O(r^2)$, representing a $10^2$–$10^3$ speedup over full-FE surrogates.

A summary table of computational costs and parameter counts from three test cases:

| Example | RB HF solves | POD HF solves | FEONet HF solves | Trunk size $r$ | Trainable params (RB/POD) | FEONet params |
|---------|--------------|---------------|------------------|----------------|---------------------------|---------------|
| Case 1  | 3            | 2100          | 0                | 3              | ~2·10⁵                    | ~7.3·10⁵      |
| Case 2  | 337          | 10,000        | 0                | 209            | ~2.9·10⁵                  | ~1.25·10⁶     |
| Case 3  | 5            | 4000          | 0                | 5              | ~2·10⁵                    |               |

In all evaluated scenarios, RB-DeepONet equaled or closely matched the accuracy of POD-DeepONet and FEONet while exhibiting significant efficiency and interpretability advantages (Wang et al., 23 Nov 2025).

8. Connections to RandONet and Further Directions

RB-DeepONet can be viewed as an interpretable variant of architectures such as RandONet (Fabiani et al., 8 Jun 2024), in which deterministic reduced-basis functions replace random features in the branch or trunk. The RB-DeepONet formulation retains universality by augmenting the RB trunk with sufficient flexibility in the branch network and could be extended to hybrid settings where RB components are combined with random or data-adaptive embeddings for multi-scale phenomena. Prospective research includes the use of kernel-POD or spectral embeddings, hybrid RB–randomized architectures, and adaptive identification of low-dimensional structure via regularization or multi-level decompositions (Fabiani et al., 8 Jun 2024).


RB-DeepONet fuses classical model-order reduction with modern operator learning, providing an efficient, stable, and interpretable surrogate for high-dimensional PDE systems with rigorous offline–online separation, certified error control, and superior computational efficiency relative to deep or fully data-driven alternatives (Wang et al., 23 Nov 2025).
