Time-Integrated Deep Operator Networks

Updated 19 November 2025
  • TI-DeepONets are neural operator frameworks that approximate instantaneous time derivatives to enable surrogate modeling of dynamical systems and PDEs with temporal causality.
  • They integrate classical numerical schemes (e.g., Euler, RK4) with deep learning to mitigate error propagation and achieve robust long-horizon predictions.
  • Empirical results demonstrate an 81–98% reduction in relative L2 extrapolation error compared to full rollout and autoregressive approaches.

Time-Integrated Deep Operator Networks (TI-DeepONets) are neural operator frameworks that achieve stable, accurate, and temporally causal surrogate modeling of dynamical systems and partial differential equations (PDEs). Distinct from conventional DeepONet paradigms—full rollout and autoregressive mapping—TI-DeepONet reformulates the operator-learning objective to approximate instantaneous time derivatives, which are then integrated via classical numerical schemes. This separation of neural approximation and integration imposes a physics-informed, Markovian structure that substantially mitigates error propagation and enables reliability on long temporal horizons, including extrapolation far beyond the training interval (Nayak et al., 22 May 2025, Mandl et al., 7 Aug 2025, Sarkar et al., 12 Nov 2025).

1. Operator Learning Paradigm and Motivation

Classic DeepONet architectures learn operators that map an input function (such as an initial or boundary condition) into a target function (such as a time-evolved solution) via two subnetworks: a branch network for function encoding and a trunk network for querying the target domain (Lin et al., 2022). In temporal settings, state prediction is commonly framed in two ways:

  • Full rollout (FR): Learn $u(x, t_0) \mapsto u(x, t)$ for $t$ over a fixed interval. FR ignores temporal causality and is unable to generalize to $t$ outside the training domain.
  • Autoregressive (AR): Model $u^n \mapsto u^{n+1}$ sequentially. AR suffers from accumulated error across steps, leading to instability in long-term prediction.

TI-DeepONet circumvents these issues by shifting the learning target to the operator generating the instantaneous time derivative $u_t = \mathcal{F}(x, t, u, \nabla u, \dots)$, so that time evolution proceeds via numerical integration (Euler, Runge–Kutta, Adams–Bashforth/Moulton, etc.), tightly coupling neural operator learning with numerical analysis (Nayak et al., 22 May 2025, Sarkar et al., 12 Nov 2025). This enforces temporal causality and renders the approach suitable for continuous-time and long-horizon tasks.

2. Architectural Features of TI-DeepONets

The canonical TI-DeepONet implements the neural operator $\mathcal{G}_\theta$ to map the current solution field $u^n$ into the instantaneous derivative $\dot{u}^n$ via a DeepONet structure:

  • Branch network: Ingests values $\{u^n(\eta_j)\}_{j=1}^m$ at sensor points, producing coefficients $b \in \mathbb{R}^p$.
  • Trunk network: Receives a query $(x, t)$ or $x$, producing $t(x) \in \mathbb{R}^p$.
  • Projection: The output is $\dot{u}^n(x) \approx \sum_{i=1}^p b_i t_i(x)$ (Nayak et al., 22 May 2025, Mandl et al., 7 Aug 2025, Sarkar et al., 12 Nov 2025); a minimal code sketch of this structure follows the list.
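A minimal PyTorch-style sketch of this branch–trunk–projection structure is given below. The sensor count m, latent dimension p, layer widths, and the one-dimensional spatial query are illustrative assumptions, not values from the cited papers:

```python
import torch
import torch.nn as nn

class DerivativeDeepONet(nn.Module):
    """Maps the current state u^n, sampled at m sensor points, and spatial
    queries x to the predicted time derivative u_t^n(x)."""
    def __init__(self, m=100, p=64, hidden=128):
        super().__init__()
        # Branch network: encodes sensor values {u^n(eta_j)} into coefficients b in R^p
        self.branch = nn.Sequential(
            nn.Linear(m, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, p),
        )
        # Trunk network: encodes a spatial query x into basis values t(x) in R^p
        self.trunk = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, p),
        )

    def forward(self, u_sensors, x_query):
        # u_sensors: (batch, m); x_query: (n_points, 1)
        b = self.branch(u_sensors)              # (batch, p)
        t = self.trunk(x_query)                 # (n_points, p)
        # Projection: u_t^n(x_k) ~ sum_i b_i * t_i(x_k)
        return torch.einsum("bp,np->bn", b, t)  # (batch, n_points)
```

When the sensor locations and query points coincide on a fixed grid, such a callable can serve directly as the derivative operator $\mathcal{G}_\theta$ fed to the integrators of Section 3.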

Extended variants incorporate dual branches for control inputs (Sarkar et al., 12 Nov 2025), dual outputs to predict both current state and the tangent (time-derivative) field (Mandl et al., 7 Aug 2025), and trunk networks adaptable to spatial or spatio-temporal contexts.

Physics-informed versions (PITI-DeepONet) introduce additional loss heads to enforce PDE residuals, initial/boundary conditions, and consistency between network-predicted and autodifferentiated time derivatives (Mandl et al., 7 Aug 2025). TI(L)-DeepONet extends the core by learning the integration scheme’s coefficients via an auxiliary MLP, dynamically adapting Runge–Kutta weights to local dynamics (Nayak et al., 22 May 2025). This imparts further robustness in stiff or highly nonlinear regimes.

3. Embedded Time Integration Methods

Following neural operator inference, TI-DeepONet advances the solution field via a classical time-stepping integrator (a code sketch of these updates follows the formulas):

  • Forward Euler: $u^{n+1} = u^n + \Delta t\, \mathcal{G}_\theta(u^n)$.
  • RK4:

$$
\begin{aligned}
k_1 &= \mathcal{G}_\theta(u^n) \\
k_2 &= \mathcal{G}_\theta\left(u^n + \tfrac{\Delta t}{2} k_1\right) \\
k_3 &= \mathcal{G}_\theta\left(u^n + \tfrac{\Delta t}{2} k_2\right) \\
k_4 &= \mathcal{G}_\theta(u^n + \Delta t\, k_3) \\
u^{n+1} &= u^n + \tfrac{\Delta t}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)
\end{aligned}
$$
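The Euler and RK4 updates above translate directly into code. In the sketch below, `G_theta` is assumed to be any callable returning the learned derivative (for instance, the DeepONet sketch in Section 2 evaluated on a fixed grid); the function names are illustrative:

```python
def euler_step(G_theta, u_n, dt):
    """Forward Euler: u^{n+1} = u^n + dt * G_theta(u^n)."""
    return u_n + dt * G_theta(u_n)

def rk4_step(G_theta, u_n, dt):
    """Classical fourth-order Runge-Kutta step using the learned derivative."""
    k1 = G_theta(u_n)
    k2 = G_theta(u_n + 0.5 * dt * k1)
    k3 = G_theta(u_n + 0.5 * dt * k2)
    k4 = G_theta(u_n + dt * k3)
    return u_n + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(G_theta, u0, dt, n_steps, step=rk4_step):
    """Long-horizon rollout: only the derivative is learned; the time
    marching itself is handled by the classical integrator."""
    traj = [u0]
    for _ in range(n_steps):
        traj.append(step(G_theta, traj[-1], dt))
    return traj
```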

TI(L)-DeepONet replaces the fixed RK weights with data-driven combinations: learnable weights $\tilde{\alpha}_i$ satisfying $\sum_i \tilde{\alpha}_i = 1$ are determined via a softmax-parameterized MLP, yielding

$$
u^{n+1} = u^n + \Delta t \sum_{i=1}^{4} \tilde{\alpha}_i\, k_i
$$

(Nayak et al., 22 May 2025). This adaptivity enables the integrator to mitigate neural approximation errors and respond to solution stiffness.
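One way to realize the softmax-parameterized weights is sketched below; the auxiliary MLP architecture and its input are assumptions for illustration, and the stage slopes $k_i$ are computed as in the RK4 sketch above:

```python
import torch
import torch.nn as nn

class LearnableRKWeights(nn.Module):
    """Auxiliary MLP producing state-dependent weights alpha_tilde for the four
    RK stage slopes; the softmax enforces sum_i alpha_tilde_i = 1."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 4),
        )

    def forward(self, u_n):
        return torch.softmax(self.net(u_n), dim=-1)   # (batch, 4), sums to 1

def ti_l_step(G_theta, weight_net, u_n, dt):
    """TI(L)-style step: u^{n+1} = u^n + dt * sum_i alpha_tilde_i * k_i."""
    k1 = G_theta(u_n)
    k2 = G_theta(u_n + 0.5 * dt * k1)
    k3 = G_theta(u_n + 0.5 * dt * k2)
    k4 = G_theta(u_n + dt * k3)
    alpha = weight_net(u_n)                            # (batch, 4)
    ks = torch.stack([k1, k2, k3, k4], dim=-1)         # (batch, n_points, 4)
    return u_n + dt * torch.einsum("bni,bi->bn", ks, alpha)
```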

4. Training Protocols and Loss Functions

Training in TI-DeepONet frameworks involves sampling trajectories from numerical solvers and constructing dataset pairs $(u^n, \partial_t u^n)$. The loss is typically the mean squared error between predicted and true derivatives or next states (Nayak et al., 22 May 2025, Sarkar et al., 12 Nov 2025).
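A minimal training-loop sketch under these assumptions is shown below: the derivative targets are assumed to be precomputed from solver trajectories (e.g., by finite differences or the solver's own right-hand side), and the optimizer and full-batch updates are illustrative choices:

```python
import torch

def train_derivative_operator(model, u_snapshots, dudt_targets, x_query,
                              epochs=2000, lr=1e-3):
    """Fit G_theta to map solver snapshots u^n to their time derivatives.
    u_snapshots: (N, m) sensor values; dudt_targets: (N, n_points)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        pred = model(u_snapshots, x_query)              # predicted u_t^n
        loss = torch.mean((pred - dudt_targets) ** 2)   # MSE on the derivative field
        loss.backward()
        opt.step()
    return model
```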

Physics-informed variants (PITI-DeepONet) augment the loss with:

  • PDE residuals: Enforce $\hat{u}_t^n(x)$ to match the PDE right-hand side evaluated on $\hat{u}^n$.
  • Initial/boundary condition penalties.
  • Consistency loss between $\hat{u}_t^n$ and $\partial_t \hat{u}^n$ obtained via automatic differentiation (AD).
  • Reconstruction errors $\|\hat{u}^n - u^n\|^2$ (Mandl et al., 7 Aug 2025); a sketch combining these terms is given below.
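A sketch of how these terms might be combined into a single training objective follows; the weights and the exact residual definitions are assumptions for illustration, and each argument is a tensor of field values or pointwise residuals:

```python
import torch

def piti_loss(u_pred, u_true, ut_pred, pde_rhs, ibc_residual, ut_autodiff,
              w_pde=1.0, w_ibc=1.0, w_cons=1.0, w_rec=1.0):
    """Weighted sum of the PITI-DeepONet-style loss terms listed above."""
    l_pde  = torch.mean((ut_pred - pde_rhs) ** 2)      # PDE residual in the tangent space
    l_ibc  = torch.mean(ibc_residual ** 2)             # initial/boundary condition penalty
    l_cons = torch.mean((ut_pred - ut_autodiff) ** 2)  # consistency with the AD time derivative
    l_rec  = torch.mean((u_pred - u_true) ** 2)        # state reconstruction error
    return w_pde * l_pde + w_ibc * l_ibc + w_cons * l_cons + w_rec * l_rec
```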

When used in optimal control (e.g., Differentiable Predictive Control, DPC (Sarkar et al., 12 Nov 2025)), the operator is pretrained and kept frozen. Policy learning proceeds by differentiating control loss through the surrogate-integrated dynamical system, leveraging automatic differentiation for efficient gradient estimation.
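A schematic of this pipeline, under stated assumptions, is sketched below: the policy network, the quadratic tracking cost, the rollout horizon, and a control-augmented derivative operator `G_theta(u, a)` (assumed to be a PyTorch module) are all illustrative, and `rk4_step` refers to the sketch in Section 3:

```python
import torch

def train_dpc_policy(policy, G_theta, u0_batch, u_target, dt, horizon,
                     epochs=500, lr=1e-3):
    """Differentiable predictive control sketch: the pretrained operator is
    frozen, and gradients of the rollout cost flow through the RK4 steps
    into the policy parameters via automatic differentiation."""
    for p in G_theta.parameters():       # assumes G_theta is an nn.Module
        p.requires_grad_(False)          # keep the surrogate fixed
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        u, cost = u0_batch, 0.0
        for _ in range(horizon):
            a = policy(u)                # control action from the current state
            u = rk4_step(lambda v: G_theta(v, a), u, dt)
            cost = cost + torch.mean((u - u_target) ** 2)
        opt.zero_grad()
        cost.backward()
        opt.step()
    return policy
```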

5. Performance Benchmarks and Comparison with Classical Schemes

TI-DeepONet and its variants are systematically benchmarked against FR and AR DeepONet baselines across canonical systems (Burgers’, KdV, Allen–Cahn, Lorenz, predator–prey, cart–pole):

  • Error reduction: TI-DeepONet achieves an 81–98% reduction in mean relative $L_2$ extrapolation error over AR and FR approaches across various PDEs (Nayak et al., 22 May 2025, Mandl et al., 7 Aug 2025).
  • Stability horizon: Accurate prediction is sustained for up to twice the training interval; AR and FR baselines diverge rapidly outside the training domain (Nayak et al., 22 May 2025).
  • Robustness: Adaptive weights in TI(L)-DeepONet further reduce errors, especially in stiff/chaotic regimes (Nayak et al., 22 May 2025).
  • Control applications: In differentiable predictive control, policies trained with TI-DeepONet surrogates achieve terminal tracking errors of $\mathcal{O}(10^{-4})$ for parabolic and reaction-diffusion PDEs and a 77% reduction in the Burgers' curvature cost, outperforming FDM-based rollouts in computational efficiency (Sarkar et al., 12 Nov 2025).
  • Physics-informing: Residual-based self-assessment (PITI-DeepONet) offers online error proxies highly correlated (Pearson $\rho \approx 0.997$) with true prediction errors (Mandl et al., 7 Aug 2025).

Representative $L_2$ Error Table (Extrapolation)

Problem      Method            Final Rel. $L_2$ Error at $T^*$
Burgers 1D   TI(L)-DeepONet    0.0462
Burgers 1D   TI-DeepONet       0.0579
Burgers 1D   Full Rollout      0.3281
Burgers 1D   Autoregressive    1.7154
KdV 1D       TI-DeepONet       0.1941
Burgers 2D   TI-DeepONet       0.1736

Data: (Nayak et al., 22 May 2025); $T^*$ denotes the final extrapolation time.

6. Extensions, Generalizations, and Limitations

TI-DeepONet formulations include:

  • Physics-informed extensions (PITI-DeepONet), enforcing tangent-space PDE structure and OOD state detection (Mandl et al., 7 Aug 2025).
  • Bayesian and LSTM-based local operator variants (B-LSTM-MIONet) for ODE systems and irregular, real-time settings, with uncertainty quantification via replica-exchange SGLD (Kong et al., 2023).
  • Integration within differentiable control pipelines (TI-DeepONet+DPC), enabling offline policy optimization for PDE-constrained problems (Sarkar et al., 12 Nov 2025).
  • Learnable integrator coefficients (TI(L)-DeepONet) for state-adaptive time integration (Nayak et al., 22 May 2025).

Limitations include potential error accumulation in extremely long rollouts without regularization, computational overhead for very high-dimensional systems, and the need for careful hyperparameter selection in network and integrator design (Lin et al., 2022, Nayak et al., 22 May 2025). TI-DeepONet does not yet directly address stiff or implicit time integration, but extensions of the formulation to such settings are identified as promising directions (Lin et al., 2022).

7. Theoretical Guarantees and Outlook

Under standard assumptions (Lipschitz continuity, input discretization error, neural universal approximation), TI-DeepONet admits formal error bounds on both local and cumulative prediction error, with stronger stability properties than AR or FR DeepONet (Lin et al., 2022). The structure ensures that error growth is limited by the integrator stability properties, and embedding the numerical integrator into the learning loop further aligns gradient flow with long-term predictive goals (Nayak et al., 22 May 2025, Mandl et al., 7 Aug 2025).

Future directions include implicit/multistep integrator variants, Bayesian uncertainty quantification, hybrid physics/data training, and application to networked, high-dimensional, and multi-agent PDE systems (Lin et al., 2022, Kong et al., 2023, Mandl et al., 7 Aug 2025, Nayak et al., 22 May 2025, Sarkar et al., 12 Nov 2025). Combining the TI-DeepONet paradigm with model-based reinforcement learning and offline policy synthesis continues to drive advances in operator-based control of complex dynamical systems (Sarkar et al., 12 Nov 2025).
