Zeroth-Order Mirror Descent
- Zeroth-Order Mirror Descent is a derivative-free optimization approach that estimates gradients from noisy function evaluations via smoothing and finite-difference techniques.
- It leverages mirror maps and Bregman divergences to iteratively update solutions under non-Euclidean geometries, achieving concrete oracle-based convergence guarantees.
- The framework applies to high-dimensional, block-structured, and minimax problems, balancing bias-variance trade-offs to enable robust optimization in both convex and nonconvex settings.
Zeroth-Order Mirror Descent Framework
The Zeroth-Order Mirror Descent (ZOMD) framework generalizes classical mirror descent methods to settings where only function values, rather than gradients, are available from potentially biased and noisy oracles. This paradigm enables efficient optimization of general convex (and, in some extensions, nonconvex or composite) objectives, including those with high-dimensional, block-structured, or distributionally robust characteristics, under non-Euclidean geometries. Zeroth-order mirror descent has gained prominence due to its concrete oracle-based convergence guarantees, its flexibility in handling bias and structure, and its robustness to non-differentiability of the objective or regularizer (Paul et al., 2023, Shao et al., 2022, Wang et al., 2017).
1. Problem Formulation and Oracle Structure
The canonical ZOMD setup involves the minimization of a convex or composite objective over a compact or structured set:
$$\min_{x \in \mathcal{X}} f(x),$$
where $\mathcal{X} \subseteq \mathbb{R}^d$ and $f$ is typically convex or satisfies smoothness/regularity assumptions on $\mathcal{X}$ (Paul et al., 2023, Shao et al., 2022, Gu et al., 2024, Yu et al., 2019).
The oracle provides only noisy, possibly biased function values,
$$\hat{f}(x) = f(x) + b(x) + \xi,$$
with bias term $b(x)$ and zero-mean noise $\xi$ of variance $\sigma^2$. This structure applies in both convex and nonconvex (including minimax and block-structured) settings, facilitating general-purpose optimization in black-box, high-dimensional, or adversarial contexts (Paul et al., 2023, Gu et al., 2024, Yu et al., 2019).
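As a concrete illustration of this oracle structure, the following minimal sketch wraps a true objective as a zeroth-order oracle returning only biased, noisy function values; the names `make_oracle` and the specific bias/noise parameters are illustrative, not from the cited papers.

```python
import numpy as np

def make_oracle(f, bias=0.0, noise_std=0.1, rng=None):
    """Wrap a true objective f as a zeroth-order oracle that returns
    only noisy, possibly biased function values (no gradients)."""
    rng = rng or np.random.default_rng(0)
    def oracle(x):
        # observed value: f(x) + systematic bias b(x) + zero-mean noise xi
        return f(x) + bias + noise_std * rng.standard_normal()
    return oracle

f = lambda x: float(np.dot(x, x))            # true objective ||x||^2
oracle = make_oracle(f, bias=0.05, noise_std=0.1)
x = np.array([1.0, -2.0])
samples = [oracle(x) for _ in range(20000)]
# The sample mean recovers f(x) + b(x) = 5.05; the noise averages out,
# but the systematic bias does not -- this is what ZOMD must tolerate.
```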
2. Gradient Surrogates via Smoothing and Estimation
Smoothing a non-differentiable or noisy objective is achieved by convolution with a stochastic kernel, yielding a differentiable approximation
$$f_\mu(x) = \mathbb{E}_u\!\left[f(x + \mu u)\right],$$
with $u$ generated from either a Gaussian ($u \sim \mathcal{N}(0, I_d)$), Rademacher ($u_i \in \{\pm 1\}$), or spherical uniform distribution, depending on the geometry (Paul et al., 2023, Shao et al., 2022, Gu et al., 2024, Wang et al., 2017).
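The smoothing effect can be checked numerically on a nonsmooth function. For $f(x) = |x|$ with Gaussian smoothing, the closed form at the kink is $f_\mu(0) = \mu\,\mathbb{E}|u| = \mu\sqrt{2/\pi}$, which a Monte Carlo estimate should reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.1
f = np.abs                      # nonsmooth at 0

# Gaussian smoothing: f_mu(x) = E_u[f(x + mu*u)], u ~ N(0, 1)
u = rng.standard_normal(200000)
f_mu_at_0 = np.mean(f(0.0 + mu * u))

# Closed form at x = 0: mu * E|u| = mu * sqrt(2/pi) ~ 0.0798
print(f_mu_at_0)
```

The smoothed value at 0 is strictly positive, illustrating the $O(\mu)$ smoothing bias that later enters the bias-variance trade-off.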
Gradient surrogates are computed by querying finite-difference values along random directions:
- Gaussian/uniform smoothing (two-point): $g_\mu(x) = \dfrac{f(x + \mu u) - f(x - \mu u)}{2\mu}\, u$ (with an additional factor of $d$ when $u$ is drawn uniformly from the unit sphere)
- Rademacher smoothing (two-point, mini-batched): $g_\mu(x) = \dfrac{1}{b} \sum_{i=1}^{b} \dfrac{f(x + \mu u_i) - f(x - \mu u_i)}{2\mu}\, u_i$
with $\mu > 0$ and $u_1, \dots, u_b$ independent samples (Shao et al., 2022).
Under suitable bias and variance control, these estimators satisfy $\mathbb{E}[g_\mu(x)] = \nabla f_\mu(x) + \beta(x)$, with the bias $\beta(x)$ determined by the oracle bias term and the smoothing parameter $\mu$ (Paul et al., 2023). In high dimensions or with structural assumptions (e.g., sparsity), Lasso-based de-biased estimators are used to maintain favorable scaling (Wang et al., 2017).
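A minimal sketch of the two-point Gaussian-smoothing estimator follows. On a quadratic objective the symmetric difference is exact, so $\mathbb{E}[g_\mu(x)] = \nabla f(x)$ and averaging many draws should recover the gradient; the function name `two_point_grad` is illustrative.

```python
import numpy as np

def two_point_grad(f, x, mu, rng, batch=1):
    """Two-point zeroth-order gradient surrogate with Gaussian smoothing:
    g = (1/b) * sum_i (f(x + mu*u_i) - f(x - mu*u_i)) / (2*mu) * u_i."""
    g = np.zeros_like(x)
    for _ in range(batch):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / batch

rng = np.random.default_rng(0)
f = lambda x: float(np.dot(x, x))          # gradient is 2x
x = np.array([1.0, -0.5, 2.0])
# For a quadratic, Gaussian smoothing introduces no gradient bias, so the
# empirical mean of the estimator approaches the true gradient 2x.
g_bar = np.mean([two_point_grad(f, x, 1e-3, rng) for _ in range(40000)], axis=0)
```

Note the per-sample variance grows with the dimension, which is why mini-batching (the `batch` argument) matters in practice.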
3. Mirror Map, Bregman Geometry, and Update Rule
Mirror descent is parameterized by a strongly convex "mirror map" (distance-generating function) $\Phi$, with corresponding Bregman divergence
$$D_\Phi(x, y) = \Phi(x) - \Phi(y) - \langle \nabla \Phi(y), x - y \rangle.$$
The ZOMD update with stochastic step-size $\eta_k$ is
$$x_{k+1} = \arg\min_{x \in \mathcal{X}} \left\{ \eta_k \langle g_\mu(x_k), x \rangle + D_\Phi(x, x_k) \right\}.$$
In structured or block-coordinate variants, one defines block-separable distance generators and applies the (possibly blockwise) proximal mapping accordingly (Yu et al., 2019). For non-Euclidean domains, entropy-like potentials and $\ell_1$ geometry are employed to leverage intrinsic dimensionality (Shao et al., 2022, Wang et al., 2017).
In minimax or composite problems, primal and dual variables each receive tailored potentials, e.g., an entropy potential on the probability simplex and a Euclidean or non-Euclidean map on the hypothesis space (Gu et al., 2024).
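For the entropy potential on the probability simplex, the Bregman proximal step above has a closed multiplicative form (exponentiated gradient). The sketch below illustrates the update with the exact gradient of a linear objective for clarity; any zeroth-order surrogate $g_\mu$ could be substituted. The function name `entropy_md_step` is illustrative.

```python
import numpy as np

def entropy_md_step(x, g, eta):
    """Mirror descent step with the entropy mirror map Phi(x) = sum_i x_i log x_i
    on the probability simplex; the Bregman projection reduces to the
    multiplicative-weights update x_i <- x_i * exp(-eta * g_i) / Z."""
    w = x * np.exp(-eta * g)
    return w / w.sum()

# Minimize the linear objective f(x) = <c, x> over the simplex.
c = np.array([0.3, 0.1, 0.7])
x = np.full(3, 1.0 / 3.0)
for k in range(200):
    x = entropy_md_step(x, c, eta=0.5)   # exact gradient of <c, x> is c
# Mass concentrates on the coordinate with the smallest cost.
```

The iterate stays on the simplex automatically, which is the practical appeal of matching the mirror map to the domain geometry.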
4. Convergence Principles and Finite-Time Guarantees
A central result is almost-sure convergence to a neighborhood of optimality for convex $f$:
$$\limsup_{k \to \infty} \left( f(x_k) - f(x^*) \right) \le \epsilon_\mu + \epsilon_b \quad \text{a.s.},$$
with $\epsilon_\mu$ (smoothing bias) and $\epsilon_b$ (oracle bias contribution) explicitly tied to $\mu$, the noise strength, and the dimension (Paul et al., 2023). Finite-time concentration inequalities provide probabilistic bounds on deviations from this neighborhood after $k$ iterations, controlled by the variance and step-size schedule (Paul et al., 2023, Gu et al., 2024).
In nonconvex settings, expected stationarity or generalized gradient-mapping norms serve as the convergence measure, with oracle complexity scaling polynomially in the dimension and the inverse target accuracy, reflecting the use of mini-batches, block sampling, or variance reduction (Shao et al., 2022, Yu et al., 2019).
For structured problems (e.g., block coordinate or sparse high-dimensional), the use of random feature selection and Lasso-based debiasing under sparsity achieves convergence rates with only logarithmic dependence on the ambient dimension (Wang et al., 2017).
5. Deterministic and Advanced Variants
Recent research incorporates deterministic vector-field-driven mirror descent, replacing the stochastic surrogate with central finite-difference schemes. The update is governed by
$$x_{k+1} = \arg\min_{x \in \mathcal{X}} \left\{ \eta_k \langle v(x_k), x \rangle + D_\Phi(x, x_k) \right\},$$
where the vector field $v(x_k)$ is constructed deterministically using $2d+1$ function values per iteration. Trajectory-wise a posteriori certification provides verifiable last-iterate guarantees under relative-smoothness-type inequalities and punctured-neighborhood generalized star-convexity conditions (Hayashi, 31 Jan 2026). The error floor is explicitly resolution-dependent, and backtracking can be used to certify monotonic descent (Hayashi, 31 Jan 2026).
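A minimal sketch of such a deterministic vector field, assuming coordinate-wise central differences (2d evaluations, plus $f(x)$ itself when used for certification, giving the $2d+1$ count) and a Euclidean mirror map; this is an illustration of the general scheme, not the cited algorithm's exact construction.

```python
import numpy as np

def central_diff_field(f, x, h):
    """Deterministic gradient surrogate from coordinate-wise central
    differences: v_i = (f(x + h*e_i) - f(x - h*e_i)) / (2h).
    With f(x) itself evaluated for descent certification, this costs
    2d + 1 function values per iteration."""
    d = x.shape[0]
    v = np.zeros(d)
    for i in range(d):
        e = np.zeros(d); e[i] = h
        v[i] = (f(x + e) - f(x - e)) / (2 * h)
    return v

f = lambda x: float(np.sum((x - np.array([1.0, -2.0])) ** 2))
x = np.zeros(2)
for k in range(100):
    # Euclidean mirror map: the Bregman proximal step is plain gradient descent.
    x = x - 0.1 * central_diff_field(f, x, h=1e-4)
# Iterates approach the minimizer (1, -2) up to a resolution-dependent floor.
```

Because the field is deterministic, every iterate is reproducible, which is what makes trajectory-wise a posteriori certification possible.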
6. Specialized Extensions: Block, Composite, and Minimax Problems
The ZOMD framework supports block coordinate approaches and composite settings:
- Block-coordinate and composite objectives: the domain is partitioned and updates are made selectively per block, enabling scalable optimization of high-dimensional, separably regularized objectives. The method achieves explicit oracle-complexity guarantees for $\epsilon$-stationarity, with a two-phase approach yielding high-probability bounds (Yu et al., 2019).
- Distributionally robust and minimax programs: ZO-SMD is adapted to minimax excess risk optimization, updating the model and dual variables via separate mirror maps, and attains optimal convergence of both the excess-risk estimate and the minimax error in smooth and nonsmooth regimes (Gu et al., 2024).
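The block-coordinate idea can be sketched as follows: at each iteration a random block is sampled and only that block's partial gradient is estimated by finite differences, so the per-iteration query cost scales with the block size rather than the full dimension. The function name `block_zo_step` and the deterministic central-difference surrogate are illustrative choices, not the cited method's exact estimator.

```python
import numpy as np

def block_zo_step(f, x, block, h, eta):
    """Update only the coordinates in `block`, estimating that block's
    partial gradient by central finite differences."""
    g = np.zeros_like(x)
    for i in block:
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return x - eta * g

rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
blocks = [np.arange(0, 3), np.arange(3, 6)]     # partition of 6 coordinates
x = rng.standard_normal(6)
for k in range(400):
    # Sample a block uniformly and update only its coordinates.
    x = block_zo_step(f, x, blocks[rng.integers(2)], h=1e-4, eta=0.2)
```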
7. Parameter Tuning, Bias–Variance Trade-off, and Practical Considerations
Parameter choices for the step sizes, the smoothing parameter $\mu$, and the batch size are critical for achieving the explicit bias-variance trade-off. Key principles:
- The smoothing parameter $\mu$ determines the bias (smoothing error of order $\mu$ or $\mu^2$) versus the variance (which inflates as $O(1/\mu^2)$ under oracle noise as $\mu \to 0$).
- For unbiased oracles, one may schedule $\mu_k \to 0$ slowly to obtain vanishing neighborhoods; under nonzero bias, the limiting error is minimized at a balancing value of $\mu$ (Paul et al., 2023).
- Step-size schedules must satisfy $\sum_k \eta_k = \infty$ and $\sum_k \eta_k^2 < \infty$; for example, $\eta_k \propto 1/k$ is effective (Paul et al., 2023).
- In nonconvex and high-dimensional settings, mini-batching and feature selection reduce variance and computational cost without degraded rates (Shao et al., 2022, Wang et al., 2017).
- Adaptive step sizes can obviate the need for Lipschitz constants (Shao et al., 2022).
The practical implementation requires only function value oracles and is robust to noise and structural heterogeneity of the objective or constraint geometry.
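The pieces above can be assembled into a minimal end-to-end ZOMD loop: a mini-batched two-point Gaussian estimator, a Euclidean mirror map (so the Bregman step is plain gradient descent), and a Robbins-Monro step-size schedule. All constants (offset 10, batch 8, $\mu = 10^{-3}$) are illustrative tuning choices, not values from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
x_star = np.array([1.0, -1.0, 0.5])
f = lambda x: float(np.sum((x - x_star) ** 2))   # black-box objective

x, mu, batch = np.zeros(3), 1e-3, 8
for k in range(3000):
    # Mini-batched two-point Gaussian-smoothing gradient surrogate.
    g = np.zeros(3)
    for _ in range(batch):
        u = rng.standard_normal(3)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    g /= batch
    # Robbins-Monro schedule: sum eta_k diverges, sum eta_k^2 converges.
    eta = 1.0 / (k + 10)
    x = x - eta * g                     # Euclidean mirror map update
# The iterate approaches x_star up to the bias-variance floor set by mu.
```

Only function values of `f` are queried anywhere in the loop, matching the zeroth-order oracle model.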
References:
- "Robust Analysis of Almost Sure Convergence of Zeroth-Order Mirror Descent Algorithm" (Paul et al., 2023)
- "Adaptive Zeroth-Order Optimisation of Nonconvex Composite Objectives" (Shao et al., 2022)
- "Stochastic Zeroth-order Optimization in High Dimensions" (Wang et al., 2017)
- "Zeroth-Order Stochastic Mirror Descent Algorithms for Minimax Excess Risk Optimization" (Gu et al., 2024)
- "Deterministic Zeroth-Order Mirror Descent via Vector Fields with A Posteriori Certification" (Hayashi, 31 Jan 2026)
- "Zeroth-Order Stochastic Block Coordinate Type Methods for Nonconvex Optimization" (Yu et al., 2019)