Vector-Field-Driven Mirror Update
- The paper introduces a vector-field-driven generalization of mirror descent that replaces the gradient with an arbitrary vector field while preserving Bregman geometry.
- It details both data-driven learned mirror maps via input-convex neural networks and derivative-free finite-difference methods, emphasizing robustness and improved convergence.
- The study provides theoretical guarantees using relative smoothness and certificate-driven backtracking, offering practical insights into certified optimization algorithms.
A vector-field-driven mirror update is a generalization of the classical mirror descent algorithm in convex optimization, in which the usual gradient term is replaced by a potentially arbitrary vector field. This approach accommodates derivative-free optimization, learned vector fields, and domain-informed updates, while preserving the geometric structure induced by a mirror map and its associated Bregman divergence. The vector-field-driven mirror update framework subsumes both traditional mirror descent with explicit gradients and modern data-driven or zeroth-order variants, and provides interfaces for rigorous analysis, algorithmic certification, and practical implementation (Tan et al., 2022, Hayashi, 31 Jan 2026).
1. Mirror Descent and Its Vector-Field Generalization
In classical mirror descent, the iteration is defined on a convex set $\mathcal{X} \subseteq \mathbb{R}^d$ equipped with a strictly convex, differentiable distance-generating function (mirror potential) $\psi$ and the associated mirror map $\nabla\psi$. The update for minimizing a convex differentiable objective $f$ is
- (Dual update) $z_{k+1} = \nabla\psi(x_k) - \eta_k \nabla f(x_k)$,
- (Primal update) $x_{k+1} = \nabla\psi^*(z_{k+1})$.
The framework replaces the gradient $\nabla f$ by a general vector field $v$, yielding the generalized update $x_{k+1} = \nabla\psi^*\big(\nabla\psi(x_k) - \eta_k\, v(x_k)\big)$, or equivalently,
$$x_{k+1} = \arg\min_{x \in \mathcal{X}} \big\{ \eta_k \langle v(x_k), x \rangle + D_\psi(x, x_k) \big\},$$
where $\psi^*$ denotes the Fenchel conjugate of $\psi$. The Bregman divergence induced by $\psi$ is
$$D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla\psi(y), x - y \rangle.$$
This formulation allows the design of updates driven by functional data, finite-difference oracles, or learned vector fields (Tan et al., 2022, Hayashi, 31 Jan 2026).
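The generalized update above can be made concrete with a standard choice of geometry. The sketch below uses the negative-entropy potential on the probability simplex, for which the dual and primal steps reduce to a multiplicative update; the specific vector field and step size are illustrative, not from either cited paper.

```python
import numpy as np

def entropy_mirror_update(x, v, eta):
    """One vector-field-driven mirror update with the negative-entropy
    potential psi(x) = sum_i x_i log x_i on the probability simplex.
    Here grad psi(x) = log x + 1, and the primal step via grad psi*
    is a softmax, so the overall update is multiplicative."""
    z = np.log(x) - eta * v   # dual step: grad psi(x_k) - eta * v (the +1 cancels)
    w = np.exp(z - z.max())   # numerically stable exponentiation
    return w / w.sum()        # primal step: map back to the simplex

# Example: a constant vector field penalizing the first two coordinates
# drives the iterate toward the third vertex of the simplex.
x = np.ones(3) / 3.0
for _ in range(50):
    x = entropy_mirror_update(x, v=np.array([1.0, 1.0, 0.0]), eta=0.5)
```

Because the iterate stays on the simplex by construction, no explicit projection is needed; this is the usual advantage of matching the mirror potential to the constraint geometry.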
2. Instantiations: Data-Driven and Derivative-Free Vector Fields
Two prominent instantiations of the vector-field-driven framework have been established.
a) Data-driven learned mirror maps: The mirror potential $\psi_\theta$ is parameterized by an input-convex neural network (ICNN), providing a convex and differentiable function with learnable parameters. The learned mirror map $\nabla\psi_\theta$ replaces the Euclidean identity map of gradient descent, adapting the geometry to data. Since the inverse mirror map $(\nabla\psi_\theta)^{-1}$ is tractable only for simple $\psi_\theta$, a separate neural network is trained to approximate this inverse (Tan et al., 2022).
b) Deterministic zeroth-order oracles: The vector field is constructed from deterministic central finite differences, e.g.
$$g_h(x)_i = \frac{f(x + h e_i) - f(x - h e_i)}{2h}$$
for $i = 1, \dots, d$, with $2d+1$ function-value evaluations per iteration. Uniform dominance over cones around the direction of the true gradient is ensured via robust conic scaling, yielding a certified vector field appropriate for Bregman geometry (Hayashi, 31 Jan 2026).
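A central-difference vector field of this form is straightforward to construct; the sketch below builds it coordinate by coordinate (the test function and step size are illustrative, not from the cited paper).

```python
import numpy as np

def central_difference_field(f, x, h=1e-5):
    """Deterministic zeroth-order vector field with components
    g_h(x)_i = (f(x + h e_i) - f(x - h e_i)) / (2h), using 2d
    function evaluations (plus f(x) itself if a second-difference
    curvature estimate is also needed)."""
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

f = lambda x: np.dot(x, x)  # gradient is 2x, so the result is checkable
g = central_difference_field(f, np.array([1.0, -2.0, 3.0]))
```

For a quadratic objective the central difference is exact up to floating-point roundoff, which makes it a convenient sanity check before applying the oracle to harder objectives.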
3. Relative Smoothness and Trajectory-Wise Certification
Global and a posteriori guarantees for vector-field-driven mirror updates rely on relative-smoothness-type inequalities with respect to the pair $(f, \psi)$. Define the mixed Bregman-style discrepancy
$$D_{f,v}(x, y) = f(x) - f(y) - \langle v(y), x - y \rangle.$$
The global relative smoothness condition requires
$$D_{f,v}(x, y) \le L\, D_\psi(x, y) \quad \text{for all } x, y,$$
ensuring monotonic decrease of $f$ for step sizes $\eta_k \le 1/L$. In practice, the weaker trajectory-wise (a posteriori) property is certified along the realized iterates:
$$D_{f,v}(x_{k+1}, x_k) \le L\, D_\psi(x_{k+1}, x_k).$$
If, additionally, the objective is star-convex outside a punctured neighborhood of a minimizer $x^*$ (i.e., $f(x) - f(x^*) \le \langle \nabla f(x), x - x^* \rangle$ for $x$ outside that neighborhood), a last-iterate guarantee of the form
$$f(x_K) - f(x^*) \le \frac{D_\psi(x^*, x_0)}{\sum_{k=0}^{K-1} \eta_k} + \text{(error floor)}$$
holds. Thus, the framework provides explicit last-iterate certificates even with nonstandard vector fields (Hayashi, 31 Jan 2026).
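The trajectory-wise property can be checked numerically after the fact. The sketch below assumes a relative-smoothness surrogate of the standard form $f(x_{k+1}) \le f(x_k) + \langle v_k, x_{k+1} - x_k \rangle + L\, D_\psi(x_{k+1}, x_k)$ and uses the Euclidean potential for simplicity; the function names and constants are illustrative, not from the cited paper.

```python
import numpy as np

def trajectory_certificate(f, xs, vs, L, bregman):
    """A posteriori check: along the realized iterates, verify the
    relative-smoothness surrogate
        f(x_{k+1}) <= f(x_k) + <v_k, x_{k+1} - x_k> + L * D_psi(x_{k+1}, x_k)
    for every consecutive pair (a small tolerance absorbs roundoff)."""
    for xk, xk1, vk in zip(xs, xs[1:], vs):
        bound = f(xk) + vk @ (xk1 - xk) + L * bregman(xk1, xk)
        if f(xk1) > bound + 1e-12:
            return False
    return True

# Euclidean potential: D_psi(x, y) = ||x - y||^2 / 2.
breg = lambda x, y: 0.5 * np.dot(x - y, x - y)
f = lambda x: np.dot(x, x)
xs = [np.array([1.0, 1.0])]
vs = []
for _ in range(20):            # plain gradient steps as the vector field
    vs.append(2 * xs[-1])
    xs.append(xs[-1] - 0.1 * vs[-1])
ok = trajectory_certificate(f, xs, vs, L=2.0, bregman=breg)
```

For this quadratic, the surrogate holds with equality at $L = 2$, so the certificate passes exactly at the smallest admissible constant.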
4. Learning Vector Fields and Mirror Potentials
Parameterizing the mirror potential via an ICNN, as in
$$z_{l+1} = \sigma_l\big(W_l z_l + A_l x + b_l\big), \qquad \psi_\theta(x) = z_L + \tfrac{\rho}{2}\|x\|^2,$$
with convex, non-decreasing activations $\sigma_l$ and elementwise nonnegative weights $W_l$, results in a convex and $\rho$-strongly convex (via the added quadratic term) potential $\psi_\theta$. Learning proceeds by minimizing a finite-horizon objective aggregating the empirical task loss and a forward-backward consistency penalty for the approximate inverse map $\widehat{(\nabla\psi_\theta)^{-1}}$. The effective update is
$$x_{k+1} = \widehat{(\nabla\psi_\theta)^{-1}}\big(\nabla\psi_\theta(x_k) - \eta_k \nabla f(x_k)\big).$$
Empirical performance matches or exceeds classical gradient-based algorithms across SVMs, multi-class classifiers, and image inverse problems, with accelerated convergence when consistency is enforced and step sizes are learned (Tan et al., 2022).
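The structural constraints that make an ICNN convex are simple to express. The sketch below is a minimal numpy forward pass, not the architecture of the cited paper: layer shapes, the softplus activation, and the convexity check along a segment are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def icnn_potential(x, Ws, As, bs, rho=0.1):
    """Minimal ICNN sketch: z_{l+1} = sigma(W_l z_l + A_l x + b_l) with
    softplus (convex, non-decreasing) activations and elementwise
    nonnegative W_l, so x -> sum(z_L) is convex; adding rho/2 ||x||^2
    makes the potential rho-strongly convex. Shapes are illustrative."""
    softplus = lambda t: np.logaddexp(0.0, t)    # log(1 + exp(t)), stable
    z = softplus(As[0] @ x + bs[0])              # first layer has no z-input
    for W, A, b in zip(Ws, As[1:], bs[1:]):
        z = softplus(np.abs(W) @ z + A @ x + b)  # |W| enforces nonnegativity
    return z.sum() + 0.5 * rho * np.dot(x, x)

d, h = 3, 8
Ws = [rng.normal(size=(h, h))]
As = [rng.normal(size=(h, d)), rng.normal(size=(h, d))]
bs = [rng.normal(size=h), rng.normal(size=h)]

# Convexity sanity check along a random segment: psi at the midpoint
# must not exceed the average of psi at the endpoints.
x0, x1 = rng.normal(size=d), rng.normal(size=d)
mid = icnn_potential(0.5 * (x0 + x1), Ws, As, bs)
avg = 0.5 * (icnn_potential(x0, Ws, As, bs) + icnn_potential(x1, Ws, As, bs))
```

In practice the nonnegativity of $W_l$ is maintained during training (e.g. by clipping or reparameterization) rather than by taking absolute values at inference, but the convexity argument is the same.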
5. Robust Conic Dominance and Derivative-Free Mirror Descent
The finite-difference instantiation constructs a vector field $v(x) = \alpha(x)\, g_h(x)$, with $g_h$ from central differences and an explicit scaling $\alpha$ chosen to achieve "robust conic dominance." For any cone of prescribed half-angle $\theta$ about the gradient direction, a containment condition of the form
$$\langle v(x), \nabla f(x) \rangle \ge \cos(\theta)\, \|v(x)\|\, \|\nabla f(x)\|$$
is enforced by solving a dominance problem over bounded-uncertainty sets (with curvature bounds estimated from second differences). The $\alpha$-scaling formula is provided in closed form. The resulting method satisfies a generalized star-convexity interface except on a resolution-dependent exceptional set, outside which full certification is possible. Within a small neighborhood of the optimum, an error floor is explicitly characterized (Hayashi, 31 Jan 2026).
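The cone-containment condition itself is easy to test. The sketch below implements only the acceptance check, i.e. whether a candidate vector field lies within a cone of half-angle $\theta$ around the gradient direction; the closed-form $\alpha$-scaling of the cited paper is not reproduced here, and the example vectors are illustrative.

```python
import numpy as np

def in_cone(v, g, cos_theta):
    """Conic dominance check: v lies in the cone of half-angle
    arccos(cos_theta) around the true gradient direction g iff
    <v, g> >= cos_theta * ||v|| * ||g||."""
    return v @ g >= cos_theta * np.linalg.norm(v) * np.linalg.norm(g)

g = np.array([1.0, 0.0])                  # reference gradient direction
cone = np.cos(np.pi / 6)                  # half-angle of 30 degrees
ok_in = in_cone(np.array([1.0, 0.2]), g, cone)   # ~11 degrees off: inside
ok_out = in_cone(np.array([0.2, 1.0]), g, cone)  # ~79 degrees off: outside
```

In the certified method this test is not applied to the unknown $\nabla f(x)$ directly; dominance is instead enforced uniformly over a bounded-uncertainty set containing it, which is what the second-difference curvature estimates are for.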
6. Theoretical Guarantees and Algorithmic Template
For the learned-mirror and finite-difference variants, the regret and last-iterate bounds are explicitly controlled by the geometry and the quality of vector field approximation:
- In the learned-mirror case, with the backward-map (inverse) error bounded by $\varepsilon$, the averaged regret satisfies a bound of the form
$$\frac{1}{K}\sum_{k=1}^{K}\big(f(x_k) - f(x^*)\big) \le O\!\Big(\frac{1}{K}\Big) + O(\varepsilon),$$
with sublinear or linear convergence up to an additive error (Tan et al., 2022).
- For certified finite-difference mirror descent, if star-convexity and the trajectory certificates hold, the last iterate satisfies
$$f(x_K) - f(x^*) \le \frac{D_\psi(x^*, x_0)}{\sum_{k=0}^{K-1} \eta_k} + \text{(error floor)}.$$
Practical implementation employs certificate-driven backtracking for step sizes and explicit construction of the scaled vector field according to the dominance criterion (Hayashi, 31 Jan 2026).
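Certificate-driven backtracking can be sketched as follows, assuming an explicit Euclidean-style step and the a posteriori certificate $f(x_+) \le f(x) + \langle v, x_+ - x \rangle + D_\psi(x_+, x)/\eta$ (which for the Euclidean potential is an Armijo-type sufficient-decrease test); the objective, vector field, and halving schedule are illustrative, not the paper's.

```python
import numpy as np

def certified_step(f, x, v, eta0, bregman, max_backtracks=30):
    """Certificate-driven backtracking sketch: halve eta until the
    a posteriori certificate
        f(x+) <= f(x) + <v, x+ - x> + D_psi(x+, x) / eta
    is verified for the realized step; return (x, 0.0) if no step
    certifies within the backtrack budget."""
    eta = eta0
    for _ in range(max_backtracks):
        x_new = x - eta * v                      # Euclidean mirror update
        bound = f(x) + v @ (x_new - x) + bregman(x_new, x) / eta
        if f(x_new) <= bound:
            return x_new, eta
        eta *= 0.5                               # certificate failed: backtrack
    return x, 0.0

breg = lambda x, y: 0.5 * np.dot(x - y, x - y)   # Euclidean Bregman divergence
f = lambda x: 0.25 * np.sum(x**4)                # grad f(x) = x**3
x = np.array([2.0, -1.0])
x_new, eta = certified_step(f, x, v=x**3, eta0=1.0, bregman=breg)
```

For this quartic the initial step is far too aggressive, so several halvings occur before a step certifies; the accepted step is guaranteed to decrease $f$, since the certificate at the accepted $\eta$ implies $f(x_+) \le f(x) - \tfrac{\eta}{2}\|v\|^2$ here.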
7. Connections and Empirical Evaluations
Vector-field-driven mirror descent includes as special cases: classical mirror descent, Blahut-Arimoto-type information-geometric algorithms, data-driven learned optimizers, and deterministic zeroth-order (finite-difference) methods. In extensive empirical evaluation, the learned mirror descent framework exhibits superior convergence compared to gradient descent and Adam across support vector machine classification, multi-class linear classification, and total variation-based image restoration tasks, often requiring significantly fewer iterations to reach comparable accuracy and loss metrics (Tan et al., 2022). The finite-difference variant exposes a hidden geometric structure linking Bregman telescoping identities, a posteriori certificate-driven analysis, and robust conic geometry (Hayashi, 31 Jan 2026).
References:
- Data-Driven Mirror Descent with Input-Convex Neural Networks (Tan et al., 2022)
- Deterministic Zeroth-Order Mirror Descent via Vector Fields with A Posteriori Certification (Hayashi, 31 Jan 2026)