Deep Onsager Operator Learning (DOOL)
- Deep Onsager Operator Learning (DOOL) is an unsupervised operator learning framework that uses the Onsager variational principle to discover solution operators for dissipative PDEs.
- It employs a specialized architecture that decouples spatial and state dependencies via branch and trunk networks, enabling efficient explicit time-stepping for long-term predictions.
- The method minimizes a Rayleighian functional to enforce energy-dissipation balance, outperforming supervised models in both accuracy and temporal generalization.
Deep Onsager Operator Learning (DOOL) is an unsupervised operator learning methodology rooted in the Onsager variational principle, designed to discover solution operators for dissipative partial differential equations (PDEs) without recourse to labeled training data. DOOL trains deep operator networks by minimizing an Onsager-defined Rayleighian functional, decouples spatial and temporal dependencies at the architectural level, and employs explicit external time-stepping for temporal evolution. This framework enables efficient, structure-preserving learning of constitutive relations and time propagation laws, particularly for systems governed by dissipation and conservation laws, and extends to second-order dissipative wave models through a modified action-based strategy (Chang et al., 10 Aug 2025).
1. Theoretical Foundation: Onsager Variational Principle
At the heart of the DOOL approach is the Onsager variational principle (OVP), which describes dissipation-driven evolution in both conserved and nonconserved systems. For a conserved scalar field $\phi(x,t)$ with an associated flux $\mathbf{j}(x,t)$, the governing conservation law is
$$\partial_t \phi + \nabla \cdot \mathbf{j} = 0.$$
The system's evolution is characterized by the minimization of the Rayleighian functional,
$$\mathcal{R} = \dot{F}[\phi] + \Phi[\mathbf{j}],$$
where $F[\phi]$ is the free energy and $\Phi[\mathbf{j}]$ is the local energy dissipation, typically quadratic in the flux, $\Phi[\mathbf{j}] = \int \frac{|\mathbf{j}|^2}{2M(\phi)}\,dx$ with mobility $M(\phi)$. Given the conservation constraint, integration by parts gives $\dot{F} = \int \frac{\delta F}{\delta\phi}\,\partial_t\phi\,dx = \int \mathbf{j}\cdot\nabla\frac{\delta F}{\delta\phi}\,dx$, and the minimization is performed over $\mathbf{j}$, yielding the reduced Rayleighian
$$\mathcal{R}[\mathbf{j}] = \int \left( \frac{|\mathbf{j}|^2}{2M(\phi)} + \mathbf{j}\cdot\nabla\frac{\delta F}{\delta \phi} \right) dx.$$
For nonconserved dynamics, e.g. the Allen–Cahn equation, the Rayleighian is minimized with respect to the rate $\partial_t\phi$ directly.
The network is thus trained to map $\phi$ to the flux $\mathbf{j}$ by minimizing $\mathcal{R}[\mathbf{j}_\theta[\phi]]$ on a collection of sampled state fields $\{\phi^{(k)}\}$, thereby enforcing consistency with the underlying thermodynamic structure and dissipation principles (Chang et al., 10 Aug 2025).
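For orientation, taking the variation of the reduced Rayleighian above with respect to $\mathbf{j}$ recovers the classical Onsager constitutive law; the concrete free energies and mobilities used in DOOL's benchmarks may differ, so the quadratic case below is only an illustrative specialization:
$$\frac{\delta\mathcal{R}}{\delta\mathbf{j}} = \frac{\mathbf{j}}{M(\phi)} + \nabla\frac{\delta F}{\delta\phi} = 0 \;\Longrightarrow\; \mathbf{j} = -M(\phi)\,\nabla\frac{\delta F}{\delta\phi}, \qquad \partial_t\phi = \nabla\cdot\Big(M(\phi)\,\nabla\frac{\delta F}{\delta\phi}\Big).$$
For the quadratic choice $F[\phi] = \int \tfrac12\phi^2\,dx$ with constant mobility $M$, this gives $\mathbf{j} = -M\nabla\phi$ and $\partial_t\phi = M\Delta\phi$, i.e. the heat equation; the trained operator $\mathbf{j}_\theta$ is expected to approximate this minimizer.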
2. DOOL Architecture and Spatiotemporal Decoupling
DOOL employs a variant of deep operator networks characterized by a clear decoupling of spatial and input (state) dependencies:
- Branch Network: Accepts as input a representation of the solution field $\phi(\cdot,t)$. Typically, this involves a coefficient vector $\boldsymbol{\alpha}[\phi]$ from a truncated basis expansion (e.g., Fourier or Hermite), preserving global features and facilitating generalization.
- Trunk Network: Receives exclusively the spatial coordinate $x$ (not time $t$), mapping it to local basis or feature functions.
The flux operator is parameterized as
$$\mathbf{j}_\theta[\phi](x) = \sum_{i=1}^{p} b_i\big(\boldsymbol{\alpha}[\phi]\big)\,\tau_i(x),$$
with branch outputs $b_i$ and trunk outputs $\tau_i$,
mirroring the DeepONet structure but with exclusive spatial mapping in the trunk. By omitting temporal input into the operator, the architecture enables highly efficient training (due to lower input dimension per sample) and seamless application of explicit time-stepping in the solution update (Chang et al., 10 Aug 2025).
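A minimal PyTorch sketch of this branch/trunk split is given below. The class name `FluxOperator`, the use of a truncated-basis coefficient vector as branch input, and all layer widths and latent dimensions are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

class FluxOperator(nn.Module):
    """DeepONet-style operator: the branch encodes the state, the trunk encodes space only."""

    def __init__(self, n_coeff: int = 32, latent: int = 64, width: int = 128):
        super().__init__()
        # Branch: maps the coefficient vector representing phi(., t) to latent weights b_i.
        self.branch = nn.Sequential(
            nn.Linear(n_coeff, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, latent),
        )
        # Trunk: maps the spatial coordinate x (no time input) to basis functions tau_i(x).
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, latent),
        )

    def forward(self, coeffs: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # coeffs: (batch, n_coeff) truncated-basis representation of phi(., t)
        # x:      (n_points, 1)    spatial query points
        b = self.branch(coeffs)   # (batch, latent)
        tau = self.trunk(x)       # (n_points, latent)
        # j_theta[phi](x) = sum_i b_i(phi) * tau_i(x)
        return b @ tau.T          # (batch, n_points) flux values at the query points
```

Because time never enters the trunk, a single trained operator can be queried at every step of the external time integrator described next.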
3. Unsupervised Variational Training
Unlike supervised DeepONet or MIONet frameworks, DOOL is trained by directly minimizing the discretized Rayleighian (or action) functional. Let $\{\phi^{(k)}\}_{k=1}^{K}$ be a collection of sampled state fields (from suitable initializations or a prior); the loss is
$$\mathcal{L}(\theta) = \frac{1}{K}\sum_{k=1}^{K} \mathcal{R}\big[\mathbf{j}_\theta[\phi^{(k)}];\,\phi^{(k)}\big],$$
with the spatial integrals in $\mathcal{R}$ evaluated by numerical quadrature.
For each $\phi^{(k)}$, the network outputs the corresponding flux $\mathbf{j}_\theta[\phi^{(k)}]$. No flux labels are used: the only physical knowledge is enforced through the structure of the OVP, ensuring all operator predictions abide by energy-dissipation balance.
For time propagation, the solution is evolved using the externally discretized conservation law, e.g. forward Euler:
$$\phi^{n+1} = \phi^{n} - \Delta t\,\nabla\cdot\mathbf{j}_\theta[\phi^{n}],$$
with the operator trained as described above (Chang et al., 10 Aug 2025).
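The following end-to-end sketch combines the unsupervised Rayleighian loss with explicit forward-Euler stepping on a 1D domain, assuming the quadratic free energy $F[\phi]=\int\tfrac12\phi^2\,dx$ and a constant mobility so that the target dynamics is the heat equation. The helper names (`coeffs`, `flux`, `ddx`), the grid resolution, network widths, and optimizer settings are illustrative choices, not taken from the paper.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
N, K, M, n_coeff, latent = 128, 256, 1.0, 16, 32        # grid points, samples, mobility, basis size
x = torch.linspace(0.0, 2.0 * math.pi, N).unsqueeze(1)  # (N, 1) spatial grid
dx = float(x[1] - x[0])

branch = nn.Sequential(nn.Linear(2 * n_coeff, 64), nn.Tanh(), nn.Linear(64, latent))
trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, latent))

def coeffs(phi):
    # Truncated Fourier representation of phi, shape (batch, 2 * n_coeff).
    c = torch.fft.rfft(phi, dim=1)[:, :n_coeff]
    return torch.cat([c.real, c.imag], dim=1) / N

def flux(phi):
    # j_theta[phi](x) = sum_i b_i(phi) * tau_i(x), evaluated on the grid, shape (batch, N).
    return branch(coeffs(phi)) @ trunk(x).T

def ddx(f):
    # Spatial derivative by finite differences, shape (batch, N).
    return torch.gradient(f, spacing=dx, dim=1)[0]

# Sampled state fields phi^(k): random smooth superpositions of sine modes.
phi_samples = torch.randn(K, n_coeff) @ torch.sin(torch.arange(1, n_coeff + 1) * x).T
dphi_dx = ddx(phi_samples)

# Unsupervised training: minimize the discretized reduced Rayleighian
#   R[j] = int ( j^2 / (2 M) + j * d/dx (dF/dphi) ) dx,  with dF/dphi = phi here.
opt = torch.optim.Adam(list(branch.parameters()) + list(trunk.parameters()), lr=1e-3)
for step in range(2000):
    j = flux(phi_samples)
    rayleighian = torch.trapezoid(j ** 2 / (2.0 * M) + j * dphi_dx, dx=dx, dim=1)
    loss = rayleighian.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# External explicit time stepping: phi^{n+1} = phi^n - dt * d/dx j_theta[phi^n].
dt, phi = 1e-4, torch.sin(x.T)   # single initial condition, shape (1, N)
with torch.no_grad():
    for n in range(1000):
        phi = phi - dt * ddx(flux(phi))
```

In this quadratic setting the minimizer of the Rayleighian is $j = -M\,\partial_x\phi$, so a well-trained operator makes the rollout approximate $\partial_t\phi = M\,\partial_{xx}\phi$; richer free energies (Cahn–Hilliard, Fokker–Planck) change only the integrand of the loss.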
4. Temporal Extrapolation and Generalization
Due to the explicit separation of spatial operator training and subsequent time-evolution, DOOL is not limited by the temporal support of the training data. Once the operator has been learned, it can be recursively applied through explicit time-stepping to generate solution trajectories well beyond the time interval spanned during operator construction.
Empirical results for canonical dissipative PDEs—including the heat equation, Fokker–Planck equation, and Cahn–Hilliard system—demonstrate that DOOL achieves high solution accuracy over both short and long time horizons, maintaining correct decay of free energy and agreement with theoretical dissipation predictions. This contrasts with supervised operator networks, which typically degrade or extrapolate poorly outside the training window (Chang et al., 10 Aug 2025).
5. Numerical Experiments and Comparative Assessment
DOOL is validated through direct experiments on multiple dissipative PDE systems:
- Heat Equation: Correctly recovers the Fick/Fourier-type constitutive law $\mathbf{j} \propto -\nabla\phi$ and enables evolution of $\phi$ with monotonic decay of the free energy.
- Fokker–Planck and Cahn–Hilliard Equations: Successfully recovers nontrivial constitutive relations with nonlinear dependence on $\phi$, achieving time trajectories closely matching references and preserving mass or other invariants as appropriate.
- Allen–Cahn Equation (Nonconserved): Adapts the variational framework for direct time derivative learning, with similar advantages.
- Second-Order Wave Models: The framework is extended via a deep least-action method (DLAM), where the loss is the discretized action functional $S[u_\theta] = \int_0^T L\big[u_\theta, \partial_t u_\theta\big]\,dt$, i.e., the time integral of the system's Lagrangian.
Here, explicit normalization layers in the neural network ensure initial and terminal state matching. This variant demonstrates that the unsupervised operator learning paradigm can be broadened to non-Onsager variational principles for wave equations with dissipation (Chang et al., 10 Aug 2025).
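For orientation, in the undamped case $\partial_{tt}u = c^2\Delta u$ the action that such a least-action loss discretizes is the classical
$$S[u] = \int_0^T\!\!\int_\Omega \Big( \tfrac12\,(\partial_t u)^2 - \tfrac{c^2}{2}\,|\nabla u|^2 \Big)\,dx\,dt;$$
the dissipative wave models treated by DLAM modify this functional, and the exact modified form used in the paper is not reproduced here.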
DOOL systematically outperforms supervised DeepONet and MIONet in solution accuracy and time extrapolation on these benchmarks, while requiring no labeled data and demonstrating superior efficiency in parameter and sample usage.
6. Extensions, Advantages, and Limitations
Key advantages of DOOL include:
- Unsupervised, physics-based training: Avoids dependence on high-fidelity simulation data, enabling deployment in settings where such data are expensive or unavailable.
- Spatiotemporal decoupling: Significantly lowers the sampling complexity for the trunk network and enhances training efficiency.
- Robust temporal generalization: Facilitates accurate extrapolation far in time due to external, physics-based time stepping.
- Systematic extensibility: Multiple input fields (e.g., initial conditions and model parameters) can be incorporated by extending the branch network, and modifications for nonstandard dissipative systems are readily formulated (e.g., via action-based losses for second-order models).
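As one plausible realization of this extension (an illustrative sketch, not the paper's construction), additional inputs such as model parameters can simply be concatenated with the state coefficients before the branch network:

```python
import torch
import torch.nn as nn

class ExtendedBranch(nn.Module):
    """Branch network accepting both state coefficients and extra model parameters."""

    def __init__(self, n_coeff: int, n_params: int, latent: int = 64, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_coeff + n_params, width), nn.Tanh(),
            nn.Linear(width, latent),
        )

    def forward(self, coeffs: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        # coeffs: (batch, n_coeff) state representation; params: (batch, n_params),
        # e.g. mobility, interface width, or descriptors of the initial condition.
        return self.net(torch.cat([coeffs, params], dim=-1))
```

A MIONet-style alternative would instead use a separate branch network per input field and combine their outputs multiplicatively.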
Notably, in the tested cases, the absence of explicit temporal input to the trunk network does not appear to constrain long-time solution accuracy or introduce pathologies in stiff or complex systems. Nevertheless, extension to highly nonlinear dynamics or to systems not admitting variational formulations may require further methodological adaptation.
7. Mathematical Summary Table: DOOL Key Components and Equations
| Component | Mathematical Expression | Notes |
|---|---|---|
| Conservation Law | $\partial_t\phi + \nabla\cdot\mathbf{j} = 0$ | Applies to conserved systems |
| Rayleighian (conserved) | $\mathcal{R} = \dot F[\phi] + \Phi[\mathbf{j}]$ | OVP definition; reduced as $\mathcal{R}[\mathbf{j}] = \int\big(\tfrac{|\mathbf{j}|^2}{2M(\phi)} + \mathbf{j}\cdot\nabla\tfrac{\delta F}{\delta\phi}\big)\,dx$ |
| Operator Architecture | $\mathbf{j}_\theta[\phi](x) = \sum_{i=1}^{p} b_i(\boldsymbol{\alpha}[\phi])\,\tau_i(x)$ | DeepONet structure; branch/trunk separation |
| Training Loss | $\mathcal{L}(\theta) = \tfrac{1}{K}\sum_{k=1}^{K}\mathcal{R}\big[\mathbf{j}_\theta[\phi^{(k)}];\,\phi^{(k)}\big]$ | Unsupervised, variational |
| Time Stepping | $\phi^{n+1} = \phi^n - \Delta t\,\nabla\cdot\mathbf{j}_\theta[\phi^n]$ or $\phi^{n+1} = \phi^n + \Delta t\,(\partial_t\phi)_\theta[\phi^n]$ | For conserved and nonconserved systems, respectively |
| Action Loss (wave eq.) | $\mathcal{L}(\theta) = S[u_\theta] = \int_0^T L[u_\theta,\partial_t u_\theta]\,dt$ | Deep least-action method for second-order systems |
References
- Deep Onsager Operator Learning as presented in (Chang et al., 10 Aug 2025).