Solving a Class of Non-Convex Minimax Optimization in Federated Learning (2310.03613v1)
Abstract: Minimax problems arise throughout machine learning applications, ranging from adversarial training and policy evaluation in reinforcement learning to AUROC maximization. To address large-scale data challenges across multiple clients with communication-efficient distributed training, federated learning (FL) is gaining popularity. Many optimization algorithms for minimax problems have been developed in the centralized (i.e., single-machine) setting. Nonetheless, algorithms for minimax problems under FL remain underexplored. In this paper, we study a class of federated nonconvex minimax optimization problems. We propose two FL algorithms (FedSGDA+ and FedSGDA-M) and improve upon existing complexity results for the most common minimax problems. For nonconvex-concave problems, we propose FedSGDA+ and reduce the communication complexity to $O(\varepsilon^{-6})$. Under nonconvex-strongly-concave and nonconvex-PL minimax settings, we prove that FedSGDA-M has the best-known sample complexity of $O(\kappa^{3} N^{-1}\varepsilon^{-3})$ and the best-known communication complexity of $O(\kappa^{2}\varepsilon^{-2})$. FedSGDA-M is the first algorithm to match the best sample complexity $O(\varepsilon^{-3})$ achieved by single-machine methods under the nonconvex-strongly-concave setting. Extensive experimental results on fair classification and AUROC maximization demonstrate the efficiency of our algorithms.
- Xidong Wu
- Jianhui Sun
- Zhengmian Hu
- Aidong Zhang
- Heng Huang
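
To make the setting concrete, below is a minimal sketch of the generic local-SGDA template that federated minimax methods of this kind build on: each client runs a few simultaneous stochastic descent (on $x$) and ascent (on $y$) steps, and the server averages the resulting iterates. The toy objective, client data, step counts, and learning rates are all illustrative assumptions, and this is not the authors' exact FedSGDA+ or FedSGDA-M update rule (FedSGDA-M, for instance, adds a momentum-based estimator), only the shared skeleton.

```python
# Illustrative sketch of local SGDA for min_x max_y (1/N) * sum_i f_i(x, y).
# All objectives and hyperparameters here are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
DX, DY, N_CLIENTS = 10, 5, 4
LAMBDA = 1.0  # assumed strong-concavity modulus in y

# Hypothetical heterogeneous client data: one coupling matrix A_i per client,
# giving f_i(x, y) = sum(sin(x)) + x @ A_i @ y - 0.5 * LAMBDA * ||y||^2,
# which is nonconvex in x and strongly concave in y.
clients = [rng.normal(size=(DX, DY)) for _ in range(N_CLIENTS)]

def grad_x(x, y, A):
    """Gradient of the toy f_i with respect to x."""
    return np.cos(x) + A @ y

def grad_y(x, y, A):
    """Gradient of the toy f_i with respect to y."""
    return A.T @ x - LAMBDA * y

def local_sgda(x, y, A, steps=5, eta_x=0.02, eta_y=0.1, noise=0.1):
    """Local simultaneous stochastic descent (x) / ascent (y) steps.

    Gaussian noise stands in for minibatch gradient noise; step counts
    and learning rates are illustrative, not tuned.
    """
    for _ in range(steps):
        gx = grad_x(x, y, A) + noise * rng.normal(size=x.shape)
        gy = grad_y(x, y, A) + noise * rng.normal(size=y.shape)
        x = x - eta_x * gx  # descend on the minimization variable
        y = y + eta_y * gy  # ascend on the maximization variable
    return x, y

x, y = np.zeros(DX), np.zeros(DY)
for _ in range(100):  # communication rounds
    updates = [local_sgda(x.copy(), y.copy(), A) for A in clients]
    x = np.mean([u[0] for u in updates], axis=0)  # server averages x ...
    y = np.mean([u[1] for u in updates], axis=0)  # ... and y

print("||grad_x|| per client:",
      [round(float(np.linalg.norm(grad_x(x, y, A))), 3) for A in clients])
```

The number of local steps per round is the lever that trades sample complexity against communication complexity: more local steps mean fewer communication rounds but more client drift under heterogeneous data, which is the tension the paper's complexity bounds quantify.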