
PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities (2303.02532v1)

Published 5 Mar 2023 in cs.LG and cs.DC

Abstract: Recently, min-max optimization problems have received increasing attention due to their wide range of applications in ML. However, most existing min-max solution techniques are either single-machine or distributed algorithms coordinated by a central server. In this paper, we focus on decentralized min-max optimization for learning with domain constraints, where multiple agents collectively solve a nonconvex-strongly-concave min-max saddle point problem without coordination from any server. Decentralized min-max optimization problems with domain constraints underpin many important ML applications, including multi-agent ML fairness assurance and policy evaluation in multi-agent reinforcement learning. We propose an algorithm called PRECISION (proximal gradient-tracking and stochastic recursive variance reduction) that enjoys a convergence rate of $O(1/T)$, where $T$ is the maximum number of iterations. To further reduce sample complexity, we propose PRECISION$+$ with an adaptive batch size technique. We show that the fast $O(1/T)$ convergence of PRECISION and PRECISION$+$ to an $\epsilon$-stationary point implies $O(\epsilon^{-2})$ communication complexity and $O(m\sqrt{n}\epsilon^{-2})$ sample complexity, where $m$ is the number of agents and $n$ is the dataset size at each agent. To our knowledge, this is the first work that achieves $O(\epsilon^{-2})$ in both sample and communication complexities in decentralized min-max learning with domain constraints. Our experiments also corroborate the theoretical results.
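The abstract names two building blocks, proximal gradient tracking (for constrained, server-free consensus) and stochastic recursive variance reduction. The following is a minimal sketch of the first ingredient only: a decentralized proximal gradient-tracking loop on a toy convex quadratic with a box constraint, using full gradients. All names (`W`, `prox_box`, the toy objective) are illustrative assumptions, not the paper's actual PRECISION algorithm or its nonconvex-strongly-concave min-max setting.

```python
import numpy as np

def prox_box(x, lo=-1.0, hi=1.0):
    """Proximal operator of the box indicator, i.e. projection onto [lo, hi]."""
    return np.clip(x, lo, hi)

def decentralized_prox_gt(grads, W, x0, eta=0.1, T=200):
    """Hedged sketch of decentralized proximal gradient tracking.

    grads : list of per-agent gradient oracles (one per agent)
    W     : doubly stochastic mixing matrix of the communication graph
    Each row of x is one agent's iterate; y tracks the network-average gradient.
    """
    m = W.shape[0]
    x = np.tile(x0, (m, 1)).astype(float)
    g = np.stack([gr(x[i]) for i, gr in enumerate(grads)])
    y = g.copy()                              # gradient-tracker initialization
    for _ in range(T):
        # consensus mixing + prox-gradient step against the tracked gradient
        x_new = prox_box(W @ x - eta * y)
        g_new = np.stack([gr(x_new[i]) for i, gr in enumerate(grads)])
        # gradient-tracking update: preserves sum(y) == sum(g) over agents
        y = W @ y + g_new - g
        x, g = x_new, g_new
    return x

# Toy problem: agent i holds f_i(x) = 0.5 * ||x - a_i||^2, so the constrained
# minimizer of the average is the projection of mean(a_i) onto the box.
a = np.array([[0.5, 2.0], [-0.3, 1.0], [0.4, 3.0]])
grads = [lambda x, ai=ai: x - ai for ai in a]
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])            # doubly stochastic mixing matrix
x_final = decentralized_prox_gt(grads, W, x0=np.zeros(2))
print(x_final.mean(axis=0))                  # close to clip(mean(a), -1, 1)
```

PRECISION additionally replaces the full per-agent gradients here with a SPIDER-style recursive variance-reduced estimator and handles both the min and max variables; this sketch only illustrates how tracking plus a prox step lets agents reach a constrained consensus solution without a server.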

Authors (4)
  1. Zhuqing Liu (18 papers)
  2. Xin Zhang (904 papers)
  3. Songtao Lu (60 papers)
  4. Jia Liu (369 papers)
Citations (5)