- The paper introduces two new zeroth-order algorithms (ZO-PDAPG and ZO-RMPDPG) for nonconvex minimax problems with coupled linear constraints.
- It establishes iteration complexity bounds of O(ε⁻⁴) (nonconvex-strongly concave) and O(ε⁻³) (nonconvex-concave) in the deterministic setting, and O(ε⁻³) and O(ε⁻⁶.5), respectively, in the stochastic setting.
- Numerical experiments on adversarial attacks on network flow problems show that the proposed methods perform comparably to state-of-the-art first-order algorithms.
Overview of Zeroth-Order Algorithms for Nonconvex Minimax Problems
Introduction to the Study
This paper studies zeroth-order algorithms for nonconvex minimax problems with coupled linear constraints, in both deterministic and stochastic settings. Such problems arise in fields including machine learning and signal processing, most notably in adversarial attacks on machine learning models. The authors introduce two novel algorithms: the zeroth-order primal-dual alternating projected gradient (ZO-PDAPG) algorithm for deterministic problems and the zeroth-order regularized momentum primal-dual projected gradient (ZO-RMPDPG) algorithm for stochastic problems, both targeting nonconvex-(strongly) concave objectives. A central contribution is the establishment of iteration complexity bounds for both algorithms to reach an ε-stationary point.
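For concreteness, a representative formulation of this problem class is sketched below; the exact constraint structure treated in the paper may differ, so this should be read as an illustration rather than the paper's precise statement.

```latex
% Representative nonconvex minimax problem with coupled linear constraints
% (illustrative form; the paper's exact formulation may differ).
\min_{x \in \mathcal{X}} \;\; \max_{\substack{y \in \mathcal{Y},\\ A x + B y \le c}} \; f(x, y)
```

Here f(x, y) is smooth, nonconvex in x and (strongly) concave in y, and the linear inequality Ax + By ≤ c couples the minimization and maximization variables, which is what distinguishes this setting from unconstrained or decoupled minimax problems.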
Algorithm Development
ZO-PDAPG and ZO-RMPDPG are single-loop methods for the minimax problems described above. As zeroth-order algorithms, they require no gradient information and rely only on function evaluations, which makes them well suited to black-box problems where gradients are unavailable or expensive to compute.
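To illustrate how such methods operate without gradients, the sketch below shows a standard two-point gradient estimator built purely from function values; the function name, smoothing radius, and sample count are illustrative choices, not details taken from the paper.

```python
import numpy as np


def zo_gradient_estimate(f, x, mu=1e-3, num_samples=200, rng=None):
    """Estimate the gradient of f at x using only function evaluations.

    Standard two-point Gaussian smoothing estimator, shown here only to
    illustrate the zeroth-order oracle model; defaults are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    dim = x.shape[0]
    grad = np.zeros(dim)
    for _ in range(num_samples):
        u = rng.standard_normal(dim)  # random Gaussian direction
        # Finite difference of function values along u, scaled back onto u.
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return grad / num_samples


# Usage: recover the gradient of a "black-box" quadratic f(z) = 0.5 z^T A z,
# whose true gradient at x = (1, 1, 1) is A x = (1, 2, 3).
if __name__ == "__main__":
    A = np.diag([1.0, 2.0, 3.0])
    f = lambda z: 0.5 * z @ A @ z
    x = np.ones(3)
    print(zo_gradient_estimate(f, x))  # noisy estimate of [1.0, 2.0, 3.0]
```

Estimators of this kind replace every gradient query in a first-order scheme with a small batch of function evaluations, which is why zeroth-order variants typically pay an extra dimension- or accuracy-dependent factor in their complexity.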
Complexity Analysis and Results
The paper establishes the iteration complexity of both algorithms for reaching an ε-stationary point. In the deterministic setting, ZO-PDAPG attains bounds of O(ε⁻⁴) for nonconvex-strongly concave problems and O(ε⁻³) for nonconvex-concave problems. In the stochastic setting, ZO-RMPDPG attains O(ε⁻³) and O(ε⁻⁶.5) for the respective cases. According to the authors, these are the first zeroth-order algorithms with theoretically guaranteed iteration complexity for the classes of constrained minimax problems considered.
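The stationarity notion behind such bounds is commonly of the following form in the nonconvex-strongly concave case; the paper's exact measure also has to account for the coupled linear constraints, so the definition below is only a representative one.

```latex
% Common epsilon-stationarity criterion in the nonconvex-(strongly) concave
% minimax literature (illustrative; the paper's measure may differ).
\Phi(x) \;:=\; \max_{y \in \mathcal{Y}(x)} f(x, y), \qquad
x \text{ is an } \varepsilon\text{-stationary point if } \|\nabla \Phi(x)\| \le \varepsilon .
```

In the merely concave case, Φ may be nonsmooth, and a projected- or proximal-gradient analogue of this criterion is typically used instead, which is one reason the nonconvex-concave bounds are stated separately.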
Numerical Experiments
The paper reports numerical experiments that apply the proposed algorithms to adversarial attacks on network flow problems, comparing their performance against state-of-the-art first-order algorithms. Effectiveness is measured by the relative cost increase caused by the adversarial attack. The results indicate that ZO-PDAPG performs comparably to existing first-order methods, supporting its practical relevance.
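One natural reading of this metric (the precise normalization is an assumption here, not something stated in the summary) is the post-attack increase in routing cost relative to the nominal cost:

```latex
% Illustrative definition; the paper may normalize or aggregate differently.
\text{relative cost increase} \;=\;
\frac{c_{\text{attacked}} - c_{\text{nominal}}}{c_{\text{nominal}}}
```

where c_nominal is the optimal network flow cost without an attack and c_attacked is the cost incurred under the adversarial perturbation.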
Conclusions
In sum, the paper contributes two zeroth-order algorithms with proven iteration complexity bounds for classes of nonconvex minimax problems with coupled linear constraints, problems that frequently arise in adversarial learning. Because the methods need only function evaluations rather than gradients, they apply directly to black-box settings and are promising candidates for robust machine learning applications.