Zeroth-Order Optimization (ZOO)
- Zeroth-Order Optimization (ZOO) is an approach that relies solely on function evaluations to solve minimization problems without explicit gradient information.
- It employs finite-difference methods using one-point or two-point estimators to approximate gradients in high-dimensional black-box settings.
- ZOO is widely applied in adversarial attacks, model-agnostic hyperparameter tuning, and reinforcement learning, demonstrating its practical significance across diverse domains.
Zeroth-Order Optimization (ZOO) encompasses a class of optimization algorithms where only function evaluations are available, with no access to explicit gradients. These methods are of central importance in large-scale black-box settings such as black-box adversarial attacks, model-agnostic hyperparameter tuning, reinforcement learning, and on-device neural network training, particularly when first-order (gradient-based) or second-order (Hessian-based) information is inaccessible or prohibitively expensive (Liu et al., 2020).
1. Problem Formulation and Fundamental Principles
Zeroth-Order Optimization targets minimization problems of the form
\[ \min_{x \in \mathbb{R}^d} f(x), \]
where the function \(f\) (possibly stochastic or nonconvex) is accessible only through a black-box evaluation oracle. In the absence of explicit gradient information, ZOO algorithms estimate the gradient at each iterate via function queries, typically using finite-difference methods along random or coordinate directions (Ruan et al., 2019; Liu et al., 2020).
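To make the query-based scheme concrete, the sketch below runs gradient descent on a black-box quadratic using coordinate-wise central differences, so every "gradient" is built purely from function evaluations. The function, step size, and smoothing parameter are illustrative choices, not taken from the source:

```python
import numpy as np

def zo_gradient_descent(f, x0, mu=1e-4, lr=0.1, iters=200):
    """Minimize a black-box function f using coordinate-wise
    central-difference gradient estimates (2 queries per coordinate)."""
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(iters):
        g = np.zeros(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = 1.0
            # Central difference along coordinate i: only f-values are used.
            g[i] = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)
        x -= lr * g
    return x

# Black-box quadratic with minimum at (1, -2); the optimizer never sees
# its analytic gradient, only evaluations.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_star = zo_gradient_descent(f, np.zeros(2))
```

This coordinate scheme costs \(2d\) queries per iteration; the random-direction estimators discussed next trade that deterministic cost for cheaper, noisier directional estimates.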
Canonical stochastic ZO estimators include:
- One-point estimator:
\[ g_{\text{1pt}}(x) = \frac{d}{\mu}\, f(x + \mu u)\, u, \]
where \(u \in \mathbb{R}^d\) is a random vector (e.g., Gaussian or uniform on the unit sphere), \(\mu > 0\) is a smoothing parameter, and the dimension \(d\) serves as a normalizing factor.
- Two-point estimator:
\[ g_{\text{2pt}}(x) = \frac{d}{2\mu}\, \bigl[ f(x + \mu u) - f(x - \mu u) \bigr]\, u, \]
where \(u\) and \(\mu\) are as in the one-point case; the symmetric difference reduces estimator variance at the cost of one extra function query per sampled direction.
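The random-direction estimators above can be sketched as follows, using directions drawn uniformly from the unit sphere. The linear test function, seed, and sample count are illustrative assumptions; a linear \(f\) is chosen because its true gradient is known exactly, so the averaged estimate can be checked against it:

```python
import numpy as np

def sample_sphere(rng, d):
    """Draw u uniformly from the unit sphere in R^d."""
    u = rng.standard_normal(d)
    return u / np.linalg.norm(u)

def g_one_point(f, x, mu, u):
    """One-point estimator: (d / mu) * f(x + mu*u) * u."""
    d = x.size
    return (d / mu) * f(x + mu * u) * u

def g_two_point(f, x, mu, u):
    """Two-point estimator: (d / (2*mu)) * [f(x + mu*u) - f(x - mu*u)] * u."""
    d = x.size
    return (d / (2 * mu)) * (f(x + mu * u) - f(x - mu * u)) * u

# Linear test function, so the true gradient is exactly a everywhere.
a = np.array([1.0, 2.0, 3.0])
f = lambda x: a @ x

rng = np.random.default_rng(0)
x, mu, n = np.zeros(3), 1e-3, 100_000
# Averaging many two-point estimates recovers the true gradient a
# (each single estimate is a noisy rank-one projection of it).
est = np.mean([g_two_point(f, x, mu, sample_sphere(rng, 3))
               for _ in range(n)], axis=0)
```

A single draw gives only a one-dimensional projection \(d\,(u^\top \nabla f)\,u\) of the gradient; unbiasedness holds in expectation over \(u\), which is why ZO methods average over directions or rely on many iterations to smooth out the estimator noise.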