
Physics-Informed DeepONets

Updated 25 October 2025
  • Physics-Informed DeepONets are neural architectures that learn mappings between infinite-dimensional spaces by embedding governing physical laws into the training process.
  • They utilize a dual-network design—branch and trunk nets—and incorporate physics-based loss terms to enhance data efficiency and generalization across various parameter regimes.
  • The approach offers significant computational speedups and improved accuracy, making it suitable for real-time simulations, design optimization, and complex PDE modeling.

Physics-Informed Deep Operator Networks (DeepONets) are neural architectures designed for learning nonlinear operators, i.e., maps between infinite-dimensional Banach spaces, under physical constraints. By embedding the governing laws of physical systems (typically partial differential equations, PDEs) directly into the training regime as soft penalty terms, Physics-Informed DeepONets combine the expressivity of operator learning with physics-based regularization. This enables sample-efficient, accurate, and physically consistent approximation of solution operators for parametric differential equations, with applications ranging from forward and inverse modeling to real-time simulation and design optimization. The architecture generalizes well to unseen inputs and provides marked improvements in both training data requirements and computational efficiency compared to traditional neural operator approaches.

1. DeepONet Architecture and Operator Learning

DeepONets are constructed to learn mappings from input functions $u$ (often sampled at fixed "sensor" points) to output functions $s$ evaluated at spatial or spatiotemporal coordinates $y$. The architecture comprises two primary subnetworks:

  • Branch net: Encodes the discretized input function values $(u(x_1), \ldots, u(x_m))$ into a feature vector.
  • Trunk net: Encodes the evaluation point $y$ into another feature vector.

The predicted function value at point $y$ for input $u$ is

$$G_\theta(u)(y) = \sum_{k=1}^{q} b_k\big(u(x_1), \ldots, u(x_m)\big)\, t_k(y),$$

where $(b_1, \ldots, b_q)$ are the branch net outputs and $(t_1, \ldots, t_q)$ are the trunk net outputs. This structure leverages the universal approximation property for continuous nonlinear operators between Banach spaces, and offers independence from the resolution of the evaluation grid (Wang et al., 2021, Goswami et al., 2022).
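A minimal sketch of this branch-trunk construction in PyTorch (the layer widths, sensor count $m$, and latent dimension $q$ below are illustrative assumptions, not the reference implementation):

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet: branch encodes u at m sensors, trunk encodes y."""
    def __init__(self, m: int = 100, q: int = 64, y_dim: int = 1):
        super().__init__()
        # Branch net: (u(x_1), ..., u(x_m)) -> (b_1, ..., b_q)
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, q))
        # Trunk net: y -> (t_1, ..., t_q)
        self.trunk = nn.Sequential(nn.Linear(y_dim, 128), nn.Tanh(), nn.Linear(128, q))

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        b = self.branch(u_sensors)                 # (batch, q)
        t = self.trunk(y)                          # (batch, q)
        return (b * t).sum(dim=-1, keepdim=True)   # G_theta(u)(y), shape (batch, 1)
```

The dot product between branch and trunk features realizes the sum over $k$ in the formula above, so the output can be evaluated at arbitrary $y$ independent of any fixed output grid.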

2. Physics-Informed Loss and Regularization

Physics-Informed DeepONets modify the standard data loss by introducing a physics loss term that enforces the output to satisfy the underlying physical laws. The total loss is

$$L(\theta) = L_\text{operator}(\theta) + L_\text{physics}(\theta)$$

  • $L_\text{operator}(\theta)$: Conventional mean-squared error over available data.
  • $L_\text{physics}(\theta)$: Penalizes the PDE residual evaluated using the DeepONet prediction, typically via automatic differentiation.

For a general PDE $N(u, s) = 0$, the physics loss is formulated as

$$L_\text{physics}(\theta) = \sum_{i,j} \left| N\big(u^{(i)}, G_\theta(u^{(i)})(y^{(j)})\big) \right|^2.$$

For example, in an anti-derivative operator learning problem, the loss explicitly penalizes the difference between the learned derivative and the input function over collocation points (Wang et al., 2021).

Automatic differentiation is critical for constructing these losses, allowing efficient computation of the required derivatives of the network output with respect to the evaluation coordinates.
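For the anti-derivative example above, where the residual is $ds/dy - u(y)$, the physics loss can be assembled in a few lines. A minimal sketch, assuming the hypothetical DeepONet class from the previous snippet and precomputed collocation data:

```python
def physics_loss(model, u_sensors, y_colloc, u_at_colloc):
    """Mean-squared PDE residual for the anti-derivative operator ds/dy = u(y).

    u_sensors:   (batch, m) input function at the fixed sensor locations
    y_colloc:    (batch, 1) collocation points
    u_at_colloc: (batch, 1) input function evaluated at y_colloc
    """
    y = y_colloc.requires_grad_(True)          # track gradients w.r.t. coordinates
    s = model(u_sensors, y)                    # G_theta(u)(y)
    ds_dy = torch.autograd.grad(
        s, y, grad_outputs=torch.ones_like(s), create_graph=True
    )[0]                                       # derivative of the output w.r.t. y
    return ((ds_dy - u_at_colloc) ** 2).mean()
```

The `create_graph=True` flag keeps the derivative itself differentiable, so the residual penalty can in turn be backpropagated through during training.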

3. Data Efficiency and Generalization

A primary advantage of physics-informed DeepONets is a reduced reliance on large paired input–output datasets. By encoding the governing equations (and hence much of the solution structure) into the loss, DeepONets can train on limited observed data, sometimes using only initial and boundary conditions as "anchors" for the learning process. The physics-based regularizer improves sample efficiency, enabling accurate operator learning even in the absence of any paired input–output observations, i.e., with up to 100% fewer paired data than conventional purely data-driven approaches (Wang et al., 2021).

Moreover, once trained, a physics-informed DeepONet predicts solutions for diverse parameter regimes (e.g., new boundary conditions or source terms) without retraining, directly mapping new inputs to the corresponding outputs in fractions of a second. This provides a computational speedup of up to three orders of magnitude compared to conventional PDE solvers in benchmark studies (Wang et al., 2021).
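As an illustration of this retraining-free inference (a hypothetical snippet reusing the DeepONet sketch above; the input count and grid size are arbitrary):

```python
import torch

model.eval()
with torch.no_grad():
    u_new = torch.randn(1000, 100)                        # 1000 unseen input functions
    y_grid = torch.linspace(0.0, 1.0, 256).unsqueeze(-1)  # shared evaluation grid
    solutions = torch.stack([
        model(u.unsqueeze(0).expand(256, -1), y_grid)     # one forward pass per input
        for u in u_new
    ])                                                    # (1000, 256, 1)
```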

4. Numerical Experiments and Benchmarks

The effectiveness of Physics-Informed DeepONets is demonstrated across several parametric PDE classes:

| PDE Class | Key Physics-Infused Feature | Accuracy Gain | Notes |
|---|---|---|---|
| Anti-derivative ODE | Enforces $ds/dx = u(x)$ | Orders of magnitude | Physics loss resolves degeneracies |
| Diffusion-Reaction | Residual of the full PDE | Relative $L^2$ error reduced from 1.92% to 0.45% | |
| Burgers' Equation | Full spatio-temporal regularizer | Orders of magnitude | ~10 ms per solution, $10^3\times$ faster |
| Eikonal Equation | Signed distance operator | Improved robustness | Generalizes to complex geometries |

For each, the addition of the physics loss substantially decreased prediction errors and conferred superior consistency across families of parametric inputs. For instance, for Burgers' equation, trained PI-DeepONets provided full spatio-temporal solutions to $\mathcal{O}(10^3)$ problems in a fraction of a second, a dramatic speedup compared to spectral or finite difference solvers (Wang et al., 2021).

5. Regularization Mechanism and Automatic Differentiation

The regularization in PI-DeepONets exploits the differentiability of the trunk network output with respect to its coordinates. By backpropagating derivatives through the network (automatic differentiation), one evaluates the terms needed to construct the physics residual

$$R(x, t) = N\big(u(x), G_\theta(u)(x, t)\big).$$

Soft constraints enforce physics as penalties rather than hard (exact) satisfaction, providing flexibility while biasing solutions toward physical realism.
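As a concrete instance, for viscous Burgers' equation $s_t + s\, s_x - \nu\, s_{xx} = 0$ the residual requires first- and second-order derivatives of the prediction with respect to the trunk coordinates. A sketch under the same assumptions as the earlier snippets, with the hypothetical DeepONet instantiated with `y_dim=2` so the trunk input is $(x, t)$:

```python
def burgers_residual(model, u_sensors, xt, nu=0.01):
    """R(x, t) = s_t + s * s_x - nu * s_xx for s = G_theta(u)(x, t)."""
    xt = xt.requires_grad_(True)               # columns: (x, t)
    s = model(u_sensors, xt)
    grads = torch.autograd.grad(
        s, xt, grad_outputs=torch.ones_like(s), create_graph=True
    )[0]
    s_x, s_t = grads[:, :1], grads[:, 1:]      # split coordinate derivatives
    s_xx = torch.autograd.grad(
        s_x, xt, grad_outputs=torch.ones_like(s_x), create_graph=True
    )[0][:, :1]                                # second derivative in x
    return s_t + s * s_x - nu * s_xx
```

Squaring and averaging this residual over collocation points yields the soft penalty; nothing forces the network to satisfy the PDE exactly, which is what distinguishes this from hard-constrained formulations.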

6. Applications and Implications

Due to their flexibility and computational efficiency, physics-informed DeepONets are applicable to a broad spectrum of scientific and engineering domains, including:

  • Real-time simulation: Enabling rapid surrogate evaluations critical for control and design.
  • Uncertainty quantification: Allowing direct propagation of input uncertainties through parametric operators.
  • Design optimization: Facilitating fast, gradient-based optimization in high-dimensional design spaces without repeated PDE solves.

The efficiency and robustness of these methods hold particular promise in scenarios where data generation is costly (e.g., CFD, experimental physics), and the governing physical laws are well-understood.

7. Implementation and Reproducibility

Publicly available code and datasets accompany the foundational work, establishing a transparent and extensible baseline for further research and practical deployment. The codebase implements the two-network DeepONet architecture, the composite loss formulation, and example scripts for benchmarking on several canonical PDE problems; a schematic of the training loop is sketched below.
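A schematic of such a composite training loop, assuming the hypothetical DeepONet and physics_loss sketches above and placeholder tensors in place of a real dataset:

```python
import torch

# Placeholder data standing in for a real operator-learning dataset
u_train  = torch.randn(512, 100)  # input functions at 100 fixed sensors
y_train  = torch.rand(512, 1)     # evaluation points with observed outputs
s_train  = torch.randn(512, 1)    # observed solution values (placeholder)
y_colloc = torch.rand(512, 1)     # collocation points for the physics loss
u_colloc = torch.randn(512, 1)    # input function evaluated at y_colloc

model = DeepONet(m=100, q=64)     # hypothetical class from the Section 1 sketch
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10_000):
    opt.zero_grad()
    loss_op = ((model(u_train, y_train) - s_train) ** 2).mean()   # L_operator
    loss_phys = physics_loss(model, u_train, y_colloc, u_colloc)  # L_physics
    loss = loss_op + loss_phys    # composite loss L(theta)
    loss.backward()
    opt.step()
```

In practice the two terms are often given tunable weights to balance data fit against residual minimization.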

Summary Table: Distinctive Features of Physics-Informed DeepONets

| Feature | Description |
|---|---|
| Operator learning | Direct mapping between infinite-dimensional function spaces |
| Physics-infused loss | PDE residual as soft regularizer via autograd |
| Data efficiency | High accuracy with limited (or no) paired data |
| Speed | Solutions to $\mathcal{O}(10^3)$ PDEs in subsecond time |
| Adaptability | Generalizes to unseen inputs after a single training phase |

In summary, Physics-Informed DeepONets extend operator learning to a physics-constrained, data-efficient, and fast-inference regime suitable for a wide range of parametric PDE problems. Automatic differentiation-based regularization is central, and the approach is validated via multiple benchmarks and open-source implementations.

References (2)
