
Energy-Constrained Operator (ECO)

Updated 8 December 2025
  • Energy-Constrained Operator (ECO) is a framework that embeds a learnable quadratic energy function to enforce dissipativity in chaotic system modeling.
  • It guarantees global trajectory boundedness by integrating a closed-form quadratic projection layer that ensures finite-time convergence to an invariant set.
  • Empirical benchmarks on systems like Lorenz '63, Kuramoto–Sivashinsky, and Navier–Stokes validate ECO’s ability to deliver long-term stable rollouts with certified invariant approximations.

The Energy-Constrained Operator (ECO) is a data-driven framework for learning bounded, dissipative dynamics in chaotic systems. ECO is designed to guarantee global trajectory boundedness when modeling high-dimensional dissipative systems (such as those appearing in chaotic partial differential equations) using general parametric operators—including modern neural operator surrogates. The central mechanism is an algebraically imposed, learnable energy function that enforces dissipativity via a closed-form quadratic projection layer. This ensures that long-term rollouts of the learned dynamics remain within a contractive invariant set, enabling the reliable estimation of invariant statistical properties and yielding a tight, certified outer approximation of the strange attractor.

1. Formulation of Operator Learning for Dissipative Chaos

ECO addresses the problem of learning finite- or infinite-dimensional chaotic dynamical systems, represented generically as

  • Continuous time: $\dot x = F(x),\ x\in\mathcal X\subseteq\mathbb R^n$,
  • Discrete time: $x_{t+1}=F(x_t),\ x_t\in\mathcal X\subseteq\mathbb R^n$.

Practical applications often require modeling a PDE,

$$w_{t+1}(x)=G(w_t(x),x), \qquad x\in\mathbb X,$$

discretized in space to yield $w_t\in\mathbb R^n$. The standard learning objective is to train a parametric operator $G^*(\theta)$—such as a neural operator or DeepONet—to minimize the mean-squared error

$$\mathcal L_{\mathrm{dyn}} = \frac1N\sum_{i=1}^N\|G^*(w_i)-w_{\mathrm{next},i}\|_2^2$$

over a dataset $\{(w_i, w_{\mathrm{next},i})\}_{i=1}^N$. However, conventional approaches can yield finite-time blow-up in rollouts due to exponential sensitivity in chaotic systems. ECO directly embeds dissipativity into the parameterized operator to ensure statistical and boundedness guarantees.
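For concreteness, the one-step objective $\mathcal L_{\mathrm{dyn}}$ can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation; the linear surrogate `G_star` and the toy data are hypothetical stand-ins for a trained neural operator and its snapshot pairs.

```python
import numpy as np

def dyn_loss(G_star, W, W_next):
    """Mean-squared one-step prediction error L_dyn over a dataset.

    G_star : callable mapping a state w of shape (n,) to the predicted next state.
    W, W_next : arrays of shape (N, n) holding the snapshot pairs (w_i, w_next_i).
    """
    preds = np.stack([G_star(w) for w in W])           # (N, n) predicted next states
    return float(np.mean(np.sum((preds - W_next) ** 2, axis=1)))

# Toy usage: a hypothetical linear surrogate on 2-D data generated by the same map.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
G_star = lambda w: A @ w
W = np.random.default_rng(0).normal(size=(16, 2))
W_next = W @ A.T                                       # exact data, so the loss vanishes
print(dyn_loss(G_star, W, W_next))
```

Minimizing this loss alone gives no control over long rollouts, which is the gap the projection layer below closes.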

2. Learnable Energy (Lyapunov) Function

To certify region-wise contraction and boundedness, ECO introduces a learnable quadratic "energy-like" (Lyapunov) function,

$$V(w)= (w-w_c)^\top Q(w-w_c), \qquad Q\in\mathbb S^n_{++},\ w_c\in\mathbb R^n,$$

where $Q$ is positive definite (e.g., learned as a diagonal matrix) and $w_c$ is a learned center. The sublevel set $M(c)=\{w: V(w)\le c\}$ defines a phase-space region such that, for suitable $\alpha<1$, trajectories outside $M(c)$ are forced to contract under the learned mapping, while trajectories inside are contained. This Lyapunov-based structure enables explicit certification of boundedness for all future iterates.
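The energy function and its sublevel set are straightforward to evaluate; a minimal sketch, assuming the diagonal parameterization of $Q$ mentioned above (the 2-D example values are hypothetical):

```python
import numpy as np

def energy(w, q_diag, w_c):
    """Quadratic energy V(w) = (w - w_c)^T Q (w - w_c) with Q = diag(q_diag) > 0."""
    d = np.asarray(w) - w_c
    return float(d @ (q_diag * d))

def in_sublevel(w, q_diag, w_c, c):
    """Membership test for the candidate invariant set M(c) = {w : V(w) <= c}."""
    return energy(w, q_diag, w_c) <= c

# Hypothetical 2-D example: Q = diag(2, 1), center at the origin.
q_diag = np.array([2.0, 1.0])
w_c = np.zeros(2)
print(energy(np.array([1.0, 1.0]), q_diag, w_c))   # 2*1^2 + 1*1^2 = 3.0
```

In training, `q_diag` and `w_c` would be learnable parameters optimized jointly with the operator.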

3. Algebraic Dissipation and Boundedness Condition

The required dissipative structure is enforced through a single algebraic inequality, shown to be necessary and sufficient for global asymptotic stability of $M(c)$:

$$V(w_{t+1})-\alpha\,\Bigl[V(w_t) + \mathrm{ReLU}\bigl(c-V(w_t)\bigr)\Bigr]\le 0,\qquad 0<\alpha<1.$$

When $V(w_t)>c$, the ReLU term vanishes, yielding strict contraction $V(w_{t+1})\le\alpha V(w_t)$. If $V(w_t)\le c$, the constraint enforces $V(w_{t+1})\le\alpha c$, keeping the trajectory within or contracting toward $M(c)$. This ensures that every trajectory enters $M(c)$ in finite time and that, once inside, it remains bounded. The result follows directly from Lyapunov-set and dissipation theory.
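The right-hand side of the inequality acts as a per-step energy budget, and its two regimes can be made explicit in a few lines (a sketch; the numeric values are illustrative only):

```python
def dissipation_bound(V_wt, c, alpha):
    """Per-step energy budget b = alpha * [V(w_t) + ReLU(c - V(w_t))].

    Outside M(c) (V(w_t) > c): the ReLU vanishes, b = alpha * V(w_t),
    forcing strict geometric contraction of the energy.
    Inside M(c) (V(w_t) <= c): b = alpha * c, so the next state cannot
    escape the (contracted) invariant set.
    """
    return alpha * (V_wt + max(0.0, c - V_wt))

# Outside the set: V = 10 > c = 4 gives b = 0.9 * 10 = 9 < 10 (contraction).
print(dissipation_bound(10.0, c=4.0, alpha=0.9))
# Inside the set: V = 2 <= c = 4 gives b = 0.9 * 4 = 3.6 <= c (containment).
print(dissipation_bound(2.0, c=4.0, alpha=0.9))
```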

4. Closed-Form Quadratic Projection Layer

ECO achieves the dissipative update by composing standard neural or operator outputs with a closed-form quadratic projection:

$$w^* = \mathrm{Proj}_{V\le b}(\hat w) = \gamma(b,V(\hat w))\,\hat w+\bigl[1-\gamma(b,V(\hat w))\bigr]\,\bar w,$$

where

$$b = \alpha\,\Bigl[V(w_t) + \mathrm{ReLU}\bigl(c-V(w_t)\bigr)\Bigr], \qquad \gamma(s,t) = \mathrm{sigmoid}[k(s-t)], \quad k\gg1,$$

and $\bar w$ is the explicit projection of $\hat w$ onto the ellipsoid $V(w)=b$. Given the Cholesky factorization $Q=LL^\top$, this projection is

$$\bar w = w_c + \sqrt{b}\,(L^\top)^{-1}\,\frac{\hat w-w_c}{\|\hat w-w_c\|_2}.$$

The layer is differentiable and agnostic to the underlying network. A gradient-based alternative formulation is

$$\mathrm{Proj}_E(\hat w) = \hat w - \frac{\max\{0,\,V(\hat w)-b\}}{\|\nabla V(\hat w)\|_2^2}\,\nabla V(\hat w).$$

This construction rigorously enforces the desired algebraic constraint and is essential to the ECO framework.
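Both projection variants can be sketched in numpy, assuming the diagonal $Q$ parameterization (so $L=\mathrm{diag}(\sqrt{q_i})$ and $(L^\top)^{-1}=\mathrm{diag}(1/\sqrt{q_i})$). This is an illustrative reading of the formulas above, not the reference implementation; note that the gradient-based form is a single first-order step and need not land exactly on $V=b$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def eco_project(w_hat, q_diag, w_c, b, k=200.0):
    """Soft closed-form projection of the raw emulator output w_hat onto {V <= b}.

    w_bar lies exactly on the ellipsoid V(w) = b; the gate gamma = sigmoid(k*(b - V(w_hat)))
    leaves w_hat essentially untouched when V(w_hat) <= b and blends toward w_bar otherwise.
    """
    d = w_hat - w_c
    V_hat = float(d @ (q_diag * d))
    w_bar = w_c + np.sqrt(b) * (d / np.sqrt(q_diag)) / np.linalg.norm(d)
    gamma = sigmoid(k * (b - V_hat))
    return gamma * w_hat + (1.0 - gamma) * w_bar

def grad_project(w_hat, q_diag, w_c, b):
    """Gradient-based alternative: remove the excess energy along grad V(w_hat)."""
    d = w_hat - w_c
    V_hat = float(d @ (q_diag * d))
    g = 2.0 * q_diag * d                       # gradient of the quadratic V at w_hat
    return w_hat - max(0.0, V_hat - b) / float(g @ g) * g
```

With isotropic $Q=I$, $w_c=0$, $b=1$, a far-outside input like $\hat w=(3,0)$ is mapped essentially onto the unit circle, while an inside input like $(0.3,0)$ passes through almost unchanged.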

5. Theoretical Guarantees of Trajectory Boundedness

The central theoretical result establishes that the ECO-mapped dynamics, with an unconstrained emulator $\hat G$ and the convex quadratic projection, provably maintain boundedness for all initial conditions. Fixing $c>1/\alpha$ and $0<\alpha<[1+(2k+2\sqrt{2k})^{-1}]^{-2}$, Theorem 3.2 asserts:

  • (a) All trajectories $\{w_t^*\}$ satisfy the dissipative constraint, hence converge in finite time to $M(c)$.
  • (b) $M(c)$ is globally asymptotically stable, and a non-asymptotic uniform bound holds:

$$\sup_{t\ge0}\|w^*_t\|\le\sqrt{\frac{\max\{V(w_0),c\}}{\lambda_{\min}(Q)}}<\infty.$$

The relaxation due to the sigmoid soft projection only enlarges $M(c)$ by a controlled factor $(1+\delta)^2$, with $\delta$ set by $k$ and negligible as $k\to\infty$.
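The uniform bound in (b) follows from $\lambda_{\min}(Q)\,\|w-w_c\|^2 \le V(w)$ for every $w$, and is trivial to evaluate once $Q$ is learned. A minimal sketch, assuming diagonal $Q$ so that $\lambda_{\min}(Q)=\min_i q_i$ (the example numbers are hypothetical):

```python
import numpy as np

def uniform_state_bound(V_w0, c, q_diag):
    """Non-asymptotic radius bound from the theorem:
    sup_t ||w_t* - w_c|| <= sqrt(max{V(w_0), c} / lambda_min(Q)).
    For diagonal Q, lambda_min(Q) = min(q_diag).
    """
    return float(np.sqrt(max(V_w0, c) / np.min(q_diag)))

# Hypothetical values: V(w_0) = 9, c = 4, Q = diag(1, 4) => radius sqrt(9/1) = 3.
print(uniform_state_bound(9.0, 4.0, np.array([1.0, 4.0])))
```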

6. Outer Approximation of Strange Attractors via Invariant Level Sets

The sublevel set $M(c)=\{w: V(w)\le c\}$, once trained, delivers a tight outer estimate of the true strange attractor—an object that is generally computationally intractable to describe explicitly. To minimize "outer error," the ECO loss incorporates a volume regularizer

$$\mathrm{Vol}(M(c)) \propto c^{n/2}/\sqrt{\det Q},$$

yielding the total loss

$$\mathcal L = \mathcal L_{\mathrm{dyn}} + \lambda\,\frac{1}{\sqrt{\det Q}},$$

with $\lambda>0$ encouraging ellipsoids of minimal volume that still contain all the long-term trajectories. This construction both regularizes the learned energy function and produces a certified outer bound.
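The combined objective is a one-line extension of the dynamics loss; a sketch assuming diagonal $Q$, where $\det Q=\prod_i q_i$ (the weight `lam` and array shapes are illustrative):

```python
import numpy as np

def eco_loss(preds, targets, q_diag, lam=1e-3):
    """Total objective L = L_dyn + lam / sqrt(det Q).

    Penalizing 1/sqrt(det Q) shrinks the volume of the ellipsoid M(c)
    (proportional to c^{n/2} / sqrt(det Q)), while the MSE term keeps
    the one-step dynamics accurate.
    """
    dyn = float(np.mean(np.sum((preds - targets) ** 2, axis=1)))
    return dyn + lam / np.sqrt(np.prod(q_diag))

# With perfect predictions, only the volume penalty remains:
p = np.zeros((3, 2)); t = np.zeros((3, 2))
print(eco_loss(p, t, np.array([4.0, 4.0]), lam=1e-3))   # 1e-3 / sqrt(16) = 2.5e-4
```

In practice the regularizer trades attractor coverage against ellipsoid tightness through $\lambda$.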

7. Empirical Benchmarks and Long-Horizon Stability

ECO has been empirically validated across several canonical chaotic benchmarks:

  • Lorenz '63 System (3D ODE): An MLP with ECO’s projection yields stable 40,000-step rollouts, recovers the double-scroll attractor, and automatically learns an appropriate ellipsoidal outer approximation.
  • Kuramoto–Sivashinsky Equation (1D PDE): DeepONet equipped with ECO avoids trajectory blow-up for over 2,000 steps. The resulting output matches ground-truth physical and PCA-projected histograms, while unconstrained DeepONet diverges.
  • Navier–Stokes (2D, $64\times64$): DeepONet+ECO rollouts remain bounded for 10,000 steps, capture the ring-shaped PCA attractor structure, recover the multifractal energy spectrum, and achieve low Kullback–Leibler divergence in both physical ($D_{KL} \approx 0.06$) and PCA ($D_{KL} \approx 0.99$) spaces. Unconstrained baselines exhibit large errors.

The table below summarizes key quantitative results:

| System/Task | Baseline Divergence ($D_{KL}$) | ECO Divergence ($D_{KL}$) |
| --- | --- | --- |
| Navier–Stokes (PCA) | High ($>1.0$) | $\approx 0.99$ |
| Navier–Stokes (physical) | High ($>1.0$) | $\approx 0.06$ |
| Kuramoto–Sivashinsky | Diverges | Matches ground truth |

In all cases, the hard-constraint projection of ECO is essential for stable long-term forecasts and for accurately capturing the invariant measure of the underlying chaotic system (Goertzen et al., 1 Dec 2025).
