
Continuous-Time Subgradient Trajectory

Updated 9 February 2026
  • Continuous-Time Subgradient Trajectory is an absolutely continuous solution to differential inclusions based on the Clarke subdifferential for locally Lipschitz functions.
  • It facilitates the analysis of convergence properties and asymptotic behavior in convex, nonconvex, and nonsmooth optimization via ergodic and Lyapunov methods.
  • It underpins applications such as robust PCA and phase retrieval by emphasizing the role of regularity and geometric properties for ensuring global convergence.

A continuous-time subgradient trajectory refers to an absolutely continuous solution x : [0,\infty) \to H (with H a Hilbert or finite-dimensional space) of the differential inclusion

\dot x(t) \in -\partial f\bigl(x(t)\bigr) \quad \text{a.e. } t > 0,

where f is typically a locally Lipschitz function and \partial f is its Clarke subdifferential or, in convex settings, the subdifferential in the sense of convex analysis. These trajectories are pivotal in understanding both continuous-time optimization dynamics and their relationship with discrete-time subgradient methods, providing foundations for asymptotic analysis, convergence properties, and rates in convex, nonconvex, and nonsmooth optimization.

1. Differential Inclusion and Subgradient Dynamics

For a locally Lipschitz function f : \mathbb{R}^d \to \mathbb{R}, the Clarke subdifferential at x is

\partial f(x) := \operatorname{conv}\left\{\lim_{k\to\infty}\nabla f(x_k)\;\middle|\;x_k\to x,\ f \text{ differentiable at } x_k\right\},

which is a nonempty, convex, compact subset. Continuous-time subgradient trajectories satisfy

\dot x(t) \in -\partial f\bigl(x(t)\bigr)

almost everywhere, and exist for any initial condition given the upper semicontinuity and local boundedness of x \mapsto \partial f(x) (Daniilidis et al., 2019). For convex f, the subdifferential is maximal monotone, leading to stronger results such as uniqueness and convergence.
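The simplest instructive case is f(x) = |x|, whose Clarke subdifferential at 0 is the whole interval [-1, 1]. The sketch below (an illustrative explicit-Euler discretization, not an algorithm from the cited papers) follows the flow by picking the minimal-norm subgradient at the kink:

```python
def clarke_subgradient_abs(x):
    """One measurable selection of the Clarke subdifferential of f(x) = |x|.
    At x = 0 the subdifferential is the whole interval [-1, 1]; we pick 0,
    the minimal-norm element, which is the selection the flow follows."""
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0

def euler_subgradient_flow(x0, h=0.01, n_steps=500):
    """Explicit Euler discretization of  x'(t) in -df(x(t))  for f = |.|."""
    x = x0
    for _ in range(n_steps):
        x = x - h * clarke_subgradient_abs(x)
    return x

x_final = euler_subgradient_flow(2.0)
print(abs(x_final))  # stays within one step size of the minimizer 0
```

The continuous flow reaches 0 in finite time (t = |x0|) and then stays there; the discretization instead hovers within one step size h of the minimizer, which is the expected discretization error for a nonsmooth kink.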

2. Asymptotic Behavior in Hilbert Spaces and Ergodic Results

For maximal monotone operators A_t : H \rightrightarrows H, possibly varying in time, the evolution equation

\dot x(t) + A_t(x(t)) \ni 0

admits strong global solutions with notable asymptotic properties. Under the integral condition

\forall (z, p) \in \operatorname{gph} A_\infty, \qquad \int_0^\infty G_{A_t}(z, p)\,dt < \infty,

where G_{A_t} is the Brézis–Haraux function, each trajectory exhibits weak ergodic convergence: \frac{1}{T} \int_0^T x(s)\,ds \rightharpoonup x_\infty for some x_\infty in the zero set of the limit operator A_\infty (Attouch et al., 2016). In the autonomous case (A_t \equiv A), this recovers classical results for monotone operator flows.
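The gap between trajectory convergence and ergodic convergence is already visible in two dimensions: a skew-symmetric linear operator is maximal monotone, its flow rotates forever without converging, yet the time averages converge to the unique zero. A minimal numerical sketch (all constants illustrative):

```python
import numpy as np

# A = [[0, -1], [1, 0]] is skew-symmetric, hence maximal monotone, with
# zer(A) = {0}.  The flow x' = -A x rotates at constant speed and never
# converges, but its ergodic mean (1/T) * integral of x(s) ds does.
theta = 0.01                                    # time step
R = np.array([[np.cos(theta), np.sin(theta)],   # exact update exp(-A*theta)
              [-np.sin(theta), np.cos(theta)]])

x = np.array([1.0, 0.0])
running_sum = np.zeros(2)
n = 20000
for _ in range(n):
    running_sum += x * theta                    # left Riemann sum for the integral
    x = R @ x

T = n * theta
ergodic_mean = running_sum / T
print(np.linalg.norm(x))             # ~1: the trajectory itself keeps circling
print(np.linalg.norm(ergodic_mean))  # ~0: the ergodic mean approaches 0 in zer(A)
```

Using the exact rotation update rather than forward Euler keeps the radius constant, so the non-convergence of the trajectory is genuine rather than a discretization artifact.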

3. Convergence Frameworks and Rates

For convex subgradient flows (\dot x(t) + \partial\phi(x(t)) \ni 0), convergence to minimizers is ensured under integrability conditions on the Brézis–Haraux function or, more generally, in the presence of tame geometry (semialgebraic, Whitney-stratifiable, or prox-regular functions). For functions f that are locally Lipschitz and semialgebraic, each bounded subgradient trajectory has uniformly bounded length, and the associated discrete subgradient methods with step sizes \alpha_k = O(1/k) converge globally to critical points (Cai et al., 15 Jan 2026).

Specialized scenarios such as hierarchical minimization (multiscale flows with \phi_t = \Phi + \beta(t)\Psi) or variational selection (e.g., Tikhonov regularization paths) are treated by careful analysis of the asymptotics, integrating time-dependent weights and coupling strengths (Attouch et al., 2016).
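A forward-Euler sketch of such a multiscale flow is below. Here \Phi(x) = (x-2)^2 is the outer objective, \Psi(x) = \max(x,0)^2 penalizes the constraint x \le 0, and \beta(t) = t; all of these choices are illustrative, not drawn from the cited papers. The hierarchical minimizer (the minimizer of \Phi over \operatorname{argmin}\Psi) is x = 0:

```python
# Euler discretization of  x'(t) = -grad Phi(x) - beta(t) * grad Psi(x)
# with beta(t) = t -> infinity.  The step size shrinks with beta so the
# explicit scheme stays stable as the penalty stiffens.
x, t, T = 2.0, 0.0, 200.0
while t < T:
    beta = t
    h = 0.4 / (1.0 + beta)                 # keeps h * Lipschitz(grad) < 1
    grad = 2.0 * (x - 2.0) + beta * 2.0 * max(x, 0.0)
    x -= h * grad
    t += h

print(x)  # approaches the hierarchical minimizer 0 as beta grows
```

At each time the iterate tracks the quasi-static equilibrium x(\beta) = 2/(1+\beta), which drifts to the constrained minimizer as \beta(t) \to \infty; this is the selection mechanism the time-dependent weight implements.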

4. Pathological and Nonconvergent Dynamics

Not every locally Lipschitz f guarantees convergence of its subgradient trajectories. There exist pathological constructions, such as functions built from splitting sets, which yield trajectories with:

  • Linear growth in f(\gamma(t)) along the flow (failure of Lyapunov decrease),
  • Periodic orbits that never approach Clarke critical points, despite all trajectories being bounded (Daniilidis et al., 2019).

These results show that additional structure—such as convexity, subdifferential regularity, or tame geometry—is necessary to ensure asymptotic convergence, Lyapunov-type decrease, or avoidance of cycling behaviors.

5. Extensions to Nonsmooth and Nonconvex Models

In robust signal recovery, including models such as robust PCA and robust phase retrieval, subgradient dynamics for nonsmooth and nonconvex objectives may lack coercivity and standard descent properties. However, under the assumption that every continuous-time subgradient trajectory is bounded (which can be verified for certain models using descent and compactness or algebraic conditions), global convergence of the corresponding discrete subgradient method can be demonstrated for step schedules of the type \alpha_k = O(1/k), provided the function is semialgebraic (Cai et al., 15 Jan 2026).
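A deliberately minimal one-dimensional caricature of phase retrieval shows the mechanism: the objective below is nonconvex, nonsmooth, and semialgebraic, yet subgradient steps \alpha_k = c/k reach a global minimizer. The model and constants are illustrative, not from the cited papers:

```python
# Minimize f(x) = |x^2 - 1| (global minimizers x = +/-1, a 1-D stand-in
# for the robust phase retrieval loss) with subgradient steps alpha_k = c/k.

def f(x):
    return abs(x * x - 1.0)

def subgrad(x):
    # chain rule with a sign selection; 0 is a valid choice at the kink
    r = x * x - 1.0
    s = 1.0 if r > 0 else (-1.0 if r < 0 else 0.0)
    return s * 2.0 * x

x, c = 2.0, 0.1
for k in range(1, 2001):
    x -= (c / k) * subgrad(x)

print(x, f(x))  # x near +1, f near 0
```

In the full-dimensional models the hard part is precisely what the cited analysis supplies: verifying that the trajectories remain bounded and that the semialgebraic structure rules out the pathological cycling of Section 4.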

In particular, for rank-one symmetric robust PCA, subgradient trajectories and their discretizations almost surely avoid spurious critical points, converging globally from almost every initialization. This is shown using linear Lyapunov function arguments and avoidance of null sets associated with nonsmooth loci and spurious critical sets.

6. Fast Rates and Second-Order Dynamical Systems

Beyond first-order flows, accelerated convergence rates in nonsmooth convex optimization have been obtained through higher-order continuous-time dynamical systems with viscous and Hessian-driven damping and time-scaling, especially when formulated with the Moreau envelope and its gradient. Under appropriate smoothness and parameter growth criteria, trajectories of such systems exhibit

  • o(1/(t^2 b(t))) decay of the Moreau envelope value,
  • o(1/(t\sqrt{b(t)\lambda(t)})) decay of its gradient,
  • o(1/t) decay of the trajectory velocity,

while their images under proximal maps converge weakly to minimizers in Hilbert space (Bot et al., 2022).
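The Moreau envelope machinery behind these rates is concrete: for f = |·| the envelope is the Huber function, its proximal map is soft-thresholding, and its gradient is (x - prox_{\lambda f}(x))/\lambda. A short sketch verifying the gradient identity numerically (constants illustrative):

```python
lam = 0.5

def prox_abs(x, lam):
    """Proximal map of f = |.| : soft-thresholding."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def moreau_env(x, lam):
    """Moreau envelope f_lam(x) = min_y |y| + (x-y)^2/(2 lam)  (Huber)."""
    p = prox_abs(x, lam)
    return abs(p) + (x - p) ** 2 / (2 * lam)

def moreau_grad(x, lam):
    """Gradient of the envelope: (x - prox_{lam f}(x)) / lam."""
    return (x - prox_abs(x, lam)) / lam

# check the gradient identity against a central finite difference
x0, h = 0.3, 1e-6
fd = (moreau_env(x0 + h, lam) - moreau_env(x0 - h, lam)) / (2 * h)
print(fd, moreau_grad(x0, lam))  # both ~ x0/lam = 0.6 since |x0| <= lam
```

Because the envelope is continuously differentiable with a 1/\lambda-Lipschitz gradient even when f is nonsmooth, the second-order damped dynamics can be driven by this gradient, which is what permits the accelerated rates above.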

7. Illustrative Models and Applications

Key use cases include:

  • Multiscale hierarchical minimization (e.g., alternating between constraint satisfaction and secondary minimization),
  • Partial differential equations in variational form (e.g., coupled Φ-Ψ energy minimization in function space),
  • Robust low-rank matrix recovery and phase retrieval, where the structure of the flows, integrability of the Brézis–Haraux function, or verification of trajectory boundedness underpin the global convergence or characterization of limit sets (Attouch et al., 2016, Cai et al., 15 Jan 2026).

A schematic classification is summarized in the table below.

| Setting | Regularity requirement | Long-time behavior |
|---|---|---|
| Convex, maximal monotone | None beyond convexity | Converges to a minimizer |
| Semialgebraic/nonconvex, bounded trajectories | Tame geometry | Converges to a critical point |
| Lipschitz, no additional structure | None | May cycle / never reach critical points |

A plausible implication is that verifying the regularity and geometric properties of the objective function is a necessary step in establishing the convergence of continuous-time subgradient flows and the efficacy of their discrete analogues.
