
Almost Lyapunov Theory for Practical Stability

Updated 20 February 2026
  • Almost Lyapunov Theory is a framework that relaxes strict Lyapunov conditions by allowing small, controlled 'bad' regions to certify stability.
  • The theory extends classical stability results to stochastic and discrete-time systems through finite-step expected decrease conditions.
  • It underpins modern data-driven control and reinforcement learning approaches by ensuring practical convergence and robust invariance properties.

Almost Lyapunov Theory provides a rigorous framework to extend classical Lyapunov-based stability arguments to nonlinear, stochastic, and data-driven systems where the strict decrease condition on the Lyapunov function is permitted to fail on small subsets of the state space. This relaxation underpins both measure-theoretic and function-based approaches that certify practical stability, almost everywhere convergence, and robust invariance properties in modern control, dynamical systems, and reinforcement learning.

1. Foundational Definitions and Relaxed Lyapunov Functions

Classical Lyapunov theory relies on a continuously differentiable, positive-definite function $V : \mathbb{R}^n \to [0,\infty)$ such that $\dot V(x) = \nabla V(x)\cdot f(x) < 0$ everywhere except the origin for the system $\dot x = f(x)$, $f(0) = 0$ (Liu et al., 2018). This guarantees asymptotic stability.

Almost Lyapunov theory relaxes the strict negativity condition. Fix $0 < c_1 < c_2$, set $D = \{x : c_1 \le V(x) \le c_2\}$ (compact), and define for decay rate $a > 0$:

$$\Omega = \{x \in D : \dot V(x) \ge -a V(x)\}$$

$V$ is an almost Lyapunov function with rate $a$ on $D$ if:

  • $\dot V(x) < -a V(x)$ for all $x \in D\setminus\Omega$
  • $\dot V(x) \ge -a V(x)$ for all $x \in \Omega$

Crucially, each connected component $\Omega^*$ of $\Omega$ must satisfy $|\Omega^*| \le \epsilon$ for some $\epsilon > 0$. This allows $V$ to be non-decreasing or even increasing on small, arbitrarily shaped "bad sets," as long as their volume is sufficiently small (Liu et al., 2018, Chang et al., 2021).
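The condition above can be checked numerically. The sketch below (all constants and the system are illustrative assumptions, not taken from the cited papers) takes a contracting rotation plus a small outward "bump" near a point $x_c$, which creates a small bad set $\Omega$, and estimates $\mathrm{vol}(\Omega)/\mathrm{vol}(D)$ on a grid:

```python
import numpy as np

# Assumed 2D system: a contracting rotation plus a small outward "bump"
# near xc that creates a bad set Omega of small volume (illustrative only).
lam, mu, a = 0.5, 2.0, 0.1
c1, c2 = 0.25, 4.0                     # level band D = {c1 <= V <= c2}
xc = np.array([1.0, 0.0])

def f(x):                              # vector field, x has shape (N, 2)
    base = np.stack([-lam * x[:, 0] - mu * x[:, 1],
                     mu * x[:, 0] - lam * x[:, 1]], axis=1)
    bump = 2.0 * np.exp(-np.sum((x - xc) ** 2, axis=1) / 0.05)[:, None] * x
    return base + bump

xs = np.linspace(-2.2, 2.2, 221)
g1, g2 = np.meshgrid(xs, xs)
pts = np.stack([g1.ravel(), g2.ravel()], axis=1)

Vv = np.sum(pts ** 2, axis=1)                # V(x) = ||x||^2
Vdot = np.sum(2.0 * pts * f(pts), axis=1)    # grad V . f

in_D = (Vv >= c1) & (Vv <= c2)
in_Omega = in_D & (Vdot >= -a * Vv)          # where strict decay fails

frac = in_Omega.sum() / in_D.sum()
print(f"estimated vol(Omega)/vol(D) = {frac:.4f}")
```

Away from the bump the rotation terms cancel in $\dot V$ and $\dot V = -2\lambda V$, so the violation set is confined to a small disk near $x_c$ and the estimated fraction comes out small.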

In stochastic and discrete-time settings, analogous relaxations apply. For example, the almost Lyapunov criterion requires a decrease only after a finite number of steps $T$ rather than at every step (Qin et al., 2019):

$$\mathbb{E}[V(x_{k+T}) \mid \mathcal{F}_k] \le V(x_k) - \alpha V(x_k), \qquad T \in \mathbb{N},\ \alpha > 0.$$
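A quick Monte Carlo check of such a finite-step expected decrease (the random system and all constants are illustrative assumptions, not from Qin et al., 2019): each random matrix draw expands one direction, yet the expectation over a horizon $T$ contracts, even though individual sample paths can grow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed random linear system x_{k+1} = A_k x_k with A_k drawn i.i.d.;
# each draw expands one coordinate, but the T-step expectation contracts.
A1 = np.array([[1.1, 0.0], [0.0, 0.4]])
A2 = np.array([[0.4, 0.0], [0.0, 1.1]])

V = lambda x: float(x @ x)
T, trials = 5, 20000
x0 = np.array([1.0, 1.0])

ratios = []
for _ in range(trials):
    x = x0.copy()
    for _ in range(T):
        A = A1 if rng.random() < 0.5 else A2
        x = A @ x
    ratios.append(V(x) / V(x0))

alpha = 1.0 - np.mean(ratios)            # estimated finite-step decrease rate
print(f"E[V(x_T)]/V(x_0) ~ {np.mean(ratios):.3f}, alpha ~ {alpha:.3f}")
print(f"worst sampled ratio = {max(ratios):.2f} (some paths grow)")
```

The gap between the mean ratio (well below 1) and the worst sampled ratio (above 1) is exactly the regime the finite-step criterion is designed for.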

2. Main Theorems and Invariance Results

The central conclusion is a practical convergence theorem (Liu et al., 2018, Theorem 3.1). Under local Lipschitzness, positivity, and appropriate decay and bad-set-volume conditions, the following holds:

  • For any $x(0) = x_0 \in D$ with $V(x_0) < c_2 - h\epsilon^{1/n} - g\epsilon$ (with explicit constants $h, g$), the trajectory remains in $D$ for all $t \ge 0$ and enters the set $\{V \le c_1 + h\epsilon^{1/n} + g\epsilon\}$ for all sufficiently large $t$.
  • As $\epsilon \to 0$, the result recovers the strict Lyapunov conclusion.
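The flavor of "practical" (rather than asymptotic) convergence is visible even in one dimension. A minimal assumed example, not from the cited papers: for $\dot x = -x + 0.5\sin(10x)$ with $V(x) = x^2$, the strict decrease fails on small intervals near the origin, and the flow settles at a nonzero equilibrium inside a small neighborhood instead of at $0$.

```python
import numpy as np

# Assumed 1D system: f(x) = -x + 0.5*sin(10x). V(x) = x^2 decreases strictly
# for |x| > 0.5, but V-dot changes sign on small intervals near the origin,
# so only practical convergence (to a small neighborhood) holds.
f = lambda x: -x + 0.5 * np.sin(10.0 * x)
V = lambda x: x * x

x, dt = 3.0, 1e-3
for _ in range(30000):              # forward Euler to t = 30
    x += dt * f(x)

print(f"final state {x:.4f}, V = {V(x):.4f}")
```

The trajectory stops at a nonzero rest point of magnitude below $0.5$: convergence to a sublevel band around the origin, not to the origin itself, which matches the theorem's conclusion as $\epsilon$ stays positive.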

In stochastic settings, almost Lyapunov (finite-step) theorems yield that if $V$ exhibits an expected decrease only after finite steps, one obtains asymptotic or exponential stability in probability, with explicit rate estimates (Qin et al., 2019). For invariance, the forward-invariant set

$$\mathcal{R} = \{ x : V(x) < c_2 - r(\epsilon) \}$$

remains invariant for all $t \ge 0$, and trajectories ultimately converge into a smaller inner strip around $\{V \le c_1\}$ (Chang et al., 2021).

These principles apply to both deterministic and stochastic systems, continuous and discrete time, and extend to almost everywhere stability outside small violation sets (Vaidya, 2015, Karabacak et al., 2016).

3. Volume and Measure-Based Methods

A key innovation in almost Lyapunov analysis is the shift from strict pointwise negativity of $\dot V$ to a volumetric or measure-theoretic smallness of $\Omega$, the set where the decrease is non-strict or violated. This is formalized via:

  • Explicit volume constraints on each connected component of $\Omega$ (Liu et al., 2018)
  • Lyapunov measures, where instead of functions, a measure $\bar\mu$ is constructed such that the induced Markov operator shrinks mass outside a small neighborhood at each step (Vaidya, 2015)

The duality between function and measure perspectives is realized via the relationship between the Perron–Frobenius operator (propagating measures) and the Koopman operator (propagating functions) (Vaidya, 2015, Karabacak et al., 2016):

$$\langle \mathbb{P} \mu, f \rangle = \langle \mu, \mathbb{U} f \rangle$$

In almost everywhere stability, Lyapunov density functions $\rho$ satisfy subinvariance properties under the Perron–Frobenius operator, enabling convergence results for almost all initial conditions (outside measure-zero sets) (Karabacak et al., 2016).
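On a finite state space the duality reduces to matrix associativity: for a row-stochastic transition matrix $K$, the Perron–Frobenius operator pushes measures forward ($\mu \mapsto \mu K$) while the Koopman operator pulls observables back ($f \mapsto K f$). A small sketch (the matrix and observable are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite-state check of <P mu, f> = <mu, U f> for a Markov chain.
n = 6
K = rng.random((n, n))
K /= K.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

mu = rng.random(n)
mu /= mu.sum()                         # probability measure on the states
fvals = rng.random(n)                  # observable f, one value per state

lhs = (mu @ K) @ fvals                 # <P mu, f>: push the measure forward
rhs = mu @ (K @ fvals)                 # <mu, U f>: pull the function back
print(abs(lhs - rhs))                  # equal up to floating point
```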

4. Computational and Data-Driven Extensions

Modern implementations, notably in control and reinforcement learning, fit the almost Lyapunov framework by:

  • Learning parameterized Lyapunov critic functions $V_\theta$ via sample-based losses that approximate the Lie derivative and penalize violations only when their frequency (volume) is above a threshold (Chang et al., 2021)
  • Certifying sample-based satisfaction of the almost Lyapunov conditions by estimating the violation set's volume on a grid or via Monte Carlo (Chang et al., 2021, Cheng et al., 29 Sep 2025)
  • Using diffusion models (e.g., $S^2$Diff) that sample entire control trajectories, biasing the generative process toward regions with negative Lie derivative except on a small set, thus realizing almost Lyapunov conditions in expectation (Cheng et al., 29 Sep 2025)
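A minimal sketch of the first ingredient, a sample-based Lyapunov critic. The diagonal parameterization, hinge loss, and constants are assumptions for illustration, not the architecture of the cited works: fit $V_\theta(x) = \theta_1 x_1^2 + \theta_2 x_2^2$ by subgradient descent on sampled Lie-derivative violations for an assumed stable linear system $\dot x = Ax$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed Hurwitz system x' = A x and a diagonal Lyapunov candidate
# V(x) = th1*x1^2 + th2*x2^2; penalize sampled states with V-dot > 0.
A = np.array([[-1.0, 2.0], [-3.0, -1.0]])
X = rng.uniform(-2, 2, size=(500, 2))          # sampled training states

vdot = lambda th: (2.0 * (X * th) * (X @ A.T)).sum(axis=1)   # grad V . f
viol_frac = lambda th: float(np.mean(vdot(th) > 0))

theta, lr = np.array([1.0, 0.1]), 0.05         # deliberately poor initial guess
init_frac = viol_frac(theta)
for _ in range(400):
    mask = vdot(theta) > 0                     # current sampled violation set
    grad = (mask[:, None] * 2.0 * X * (X @ A.T)).mean(axis=0)
    theta = np.maximum(theta - lr * grad, 0.05)  # keep V positive definite
final_frac = viol_frac(theta)
print(f"violation fraction: {init_frac:.3f} -> {final_frac:.3f}")
```

The empirical violation fraction plays the role of the bad-set volume: training stops once it falls below the certification threshold rather than demanding zero violations pointwise.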

Tables of empirical metrics in these studies report small violation-set fractions and demonstrate invariance and stability across a variety of high-dimensional tasks.

5. Application Domains and Key Examples

Almost Lyapunov theory provides practical certification in scenarios where strict conditions are unattainable, due to model uncertainty, perturbations, or algorithmic approximations. Key application areas include:

  • Nonlinear systems where $\dot V$ is hard to bound globally or is only verified on sampled data (Liu et al., 2018, Chang et al., 2021)
  • Stochastic systems (e.g., consensus under random communication graphs, distributed solvers under random asynchronous updating) where contraction can only be certified in blocks or in expectation (Qin et al., 2019)
  • Safe model-based control via diffusion policies in robotics and aerospace, with empirical evidence of strong safety and stability guarantees despite model-free or approximate learning (Cheng et al., 29 Sep 2025)

A concrete example (Liu et al., 2018): in $\mathbb{R}^2$, for

$$\dot x = \begin{pmatrix} -\lambda(x) & -\mu \\ \mu & -\lambda(x) \end{pmatrix} x$$

with $V(x) = \|x\|^2$, one has $\dot V > 0$ inside a small disk $B_\rho(x_c)$. By quantifying the disk's volume and applying the almost Lyapunov theorem, practical convergence to a neighborhood of the origin is rigorously certified.
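A numerical companion to this example, with assumed constants: take $\lambda(x)$ to dip negative inside a small disk around $x_c$, so $\dot V = -2\lambda(x)\|x\|^2 > 0$ there (the rotation terms cancel in $\dot V$), and integrate the flow.

```python
import numpy as np

# Assumed instance of the example: lambda(x) goes negative inside a small
# disk around xc, creating a bad set, yet trajectories still converge.
lam0, beta, mu = 0.5, 1.0, 2.0
xc, sig = np.array([1.0, 0.0]), 0.05

lam = lambda x: lam0 - beta * np.exp(-np.sum((x - xc) ** 2) / sig)
f = lambda x: np.array([-lam(x) * x[0] - mu * x[1],
                        mu * x[0] - lam(x) * x[1]])
V = lambda x: float(x @ x)

print("V-dot at disk centre:", 2.0 * -lam(xc) * V(xc))  # positive: bad set

x, dt = np.array([1.5, 0.0]), 1e-3
for _ in range(20000):                 # forward Euler to t = 20
    x = x + dt * f(x)
print("V after t = 20:", V(x))         # small: practical convergence
```

Each revolution of the spiral spends only a short time inside the bad disk, so the decay accumulated elsewhere dominates the brief growth episodes, which is exactly the volume-versus-decay trade-off the theorem quantifies.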

6. Implications, Generalizations, and Open Directions

The almost Lyapunov paradigm fundamentally extends the class of systems for which stability can be certified, relaxing the classical requirement of strict negativity to a setting where only the "bad" set's measure matters. This approach is essential for:

  • Robustness to numerical and model-based errors in $L_f V(x)$ calculations, as encountered in learning-based or data-driven methods (Chang et al., 2021)
  • Enabling systematic construction of certificate functions when strict Lyapunov candidates do not exist, e.g., via randomized sampling or convex optimization subject to volume constraints (Liu et al., 2018)
  • Achieving global results by iterating local practical convergence over level bands $D(c)$, leading to global uniform asymptotic stability under uniform bad-set volume control (Liu et al., 2018)

Open problems highlighted include developing less conservative volume bounds (e.g., through variable tube radii), relaxing the non-vanishing vector field assumption, extending to invariant sets beyond the origin, and exploiting finer geometric characteristics (e.g., “thinness”) rather than pure volume (Liu et al., 2018).

In stochastic and high-dimensional settings, almost Lyapunov theory underpins operator-theoretic certificates of geometric decay and almost sure stability, with explicit algorithms for constructing Lyapunov measures and for finite-dimensional Markov approximations (“coarse stability”) (Vaidya, 2015).

7. Summary Table: Key Variants of Almost Lyapunov Theory

| Setting | Relaxed Condition | Main Guarantee |
| --- | --- | --- |
| Continuous-time nonlinear (deterministic) | $\dot V < -aV$ outside $\Omega$, $\mathrm{vol}(\Omega) \le \epsilon$ | Invariance and practical convergence (Liu et al., 2018) |
| Discrete-time stochastic | $\mathbb{E}[V(x_{k+T})] \le V(x_k) - \alpha V(x_k)$ | Almost sure/exponential stability (Qin et al., 2019) |
| Measure-theoretic (a.e. stability) | Lyapunov density $\rho > 0$, $P\rho < \rho$ a.e. | Convergence for Lebesgue-almost-all $x$ (Karabacak et al., 2016) |
| Data-driven control via neural critics | Lie derivative violated only on small-volume sets | Certified robust policy invariance and attraction (Chang et al., 2021, Cheng et al., 29 Sep 2025) |

Almost Lyapunov theory, by leveraging volumetric or measure-theoretic constraints, robustly bridges Lyapunov stability with modern computational, stochastic, and data-driven dynamical systems (Liu et al., 2018, Vaidya, 2015, Qin et al., 2019, Chang et al., 2021, Cheng et al., 29 Sep 2025, Karabacak et al., 2016).
