
Zeroth-Order Methods for Convex-Concave Minmax Problems: Applications to Decision-Dependent Risk Minimization (2106.09082v2)

Published 16 Jun 2021 in math.OC and cs.LG

Abstract: Min-max optimization is emerging as a key framework for analyzing problems of robustness to strategically and adversarially generated data. We propose a random-reshuffling-based, gradient-free Optimistic Gradient Descent-Ascent (OGDA) algorithm for solving convex-concave min-max problems with finite-sum structure. We prove that the algorithm enjoys the same convergence rate as that of zeroth-order algorithms for convex minimization problems. We further specialize the algorithm to solve distributionally robust, decision-dependent learning problems, where gradient information is not readily available. Through illustrative simulations, we observe that our proposed approach learns models that are simultaneously robust against adversarial distribution shifts and strategic decisions from the data sources, and outperforms existing methods from the strategic classification literature.
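The core idea — optimistic gradient descent-ascent driven only by function evaluations — can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the toy quadratic saddle problem, the step size, the smoothing radius `mu`, and the two-point Gaussian gradient estimator are all assumptions, and the paper's random reshuffling over a finite sum of components is omitted.

```python
import numpy as np

# Hypothetical toy problem (not from the paper): the convex-concave quadratic
# f(x, y) = 0.5||x||^2 + x^T A y - 0.5||y||^2, whose unique saddle point is (0, 0).
rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))

def f(x, y):
    return 0.5 * x @ x + x @ A @ y - 0.5 * y @ y

def zo_grad(x, y, mu=1e-4):
    """Two-point zeroth-order estimate of (grad_x f, -grad_y f).

    Only two evaluations of f are used; delta * (u, v) is an unbiased-up-to-O(mu)
    estimate of the gradient along random Gaussian directions.
    """
    u = rng.standard_normal(d)
    v = rng.standard_normal(d)
    delta = (f(x + mu * u, y + mu * v) - f(x, y)) / mu
    return delta * u, -delta * v  # descent direction in x, ascent direction in y

# Optimistic GDA: step along 2*g_t - g_{t-1}, the "optimistic" extrapolation
# that reuses the previous gradient estimate.
x = rng.standard_normal(d)
y = rng.standard_normal(d)
gx_prev, gy_prev = zo_grad(x, y)
eta = 0.01
for _ in range(20000):
    gx, gy = zo_grad(x, y)
    x = x - eta * (2 * gx - gx_prev)
    y = y - eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy

# Both iterates contract toward the saddle point at the origin.
print(np.linalg.norm(x), np.linalg.norm(y))
```

The optimistic correction `2*g_t - g_{t-1}` is what distinguishes OGDA from plain gradient descent-ascent: it damps the rotational dynamics of the saddle problem, which is why the method converges on bilinear-type couplings where simultaneous GDA can cycle.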

Authors (5)
  1. Chinmay Maheshwari (20 papers)
  2. Chih-Yuan Chiu (20 papers)
  3. Eric Mazumdar (36 papers)
  4. S. Shankar Sastry (77 papers)
  5. Lillian J. Ratliff (59 papers)
Citations (23)
