Curvature accelerated decentralized non-convex optimization for high-dimensional machine learning problems (2504.04073v1)

Published 5 Apr 2025 in math.OC

Abstract: We consider distributed optimization as motivated by machine learning in a multi-agent system: each agent holds local data, and the goal is to minimize an aggregate loss function over a common model via an interplay of local training and distributed communication. In the most interesting case of training a neural network, the loss functions are non-convex, and the high dimension of the model poses challenges in terms of communication and computation. We propose a primal-dual method that leverages second-order information in the local training sub-problems to accelerate the algorithm. To ease the computational burden, we invoke a quasi-Newton local solver with cost linear in the model dimension. Moreover, our method is communication-efficient in that each agent broadcasts its local model only once per round. We rigorously establish the convergence of the algorithm and demonstrate its merits through numerical experiments.
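The abstract names the method's ingredients without giving its update rules, so the following is only a plausible illustration, not the paper's algorithm: a standard decentralized ADMM template in which each agent's augmented-Lagrangian sub-problem is solved with a quasi-Newton method (SciPy's L-BFGS-B, whose per-iteration cost is linear in the model dimension), and each agent broadcasts its model once per round. The ring topology, penalty parameter `rho`, toy non-convex losses, and iteration counts are all assumptions made for the sketch.

```python
# Hypothetical sketch of a decentralized primal-dual round with a quasi-Newton
# local solver. The paper's exact updates are not given in the abstract; this
# uses a generic decentralized ADMM template over a ring of agents.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_agents, dim, rho, rounds = 4, 20, 1.0, 30

# Toy non-convex local losses: f_i(x) = ||A_i x - b_i||^2 + 0.1 * sum(sin(x)).
A = [rng.standard_normal((30, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(30) for _ in range(n_agents)]

def local_loss(x, i):
    r = A[i] @ x - b[i]
    return r @ r + 0.1 * np.sum(np.sin(x))

def local_grad(x, i):
    return 2 * A[i].T @ (A[i] @ x - b[i]) + 0.1 * np.cos(x)

# Assumed communication graph: a ring, each agent talks to two neighbors.
neighbors = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]

x = [np.zeros(dim) for _ in range(n_agents)]    # primal models
lam = [np.zeros(dim) for _ in range(n_agents)]  # dual variables

for k in range(rounds):
    # Models from the single broadcast at the end of the previous round.
    x_prev = [xi.copy() for xi in x]
    for i in range(n_agents):
        avg = [(x_prev[i] + x_prev[j]) / 2 for j in neighbors[i]]

        def aug_lagrangian(z, i=i, avg=avg):
            val = local_loss(z, i) + lam[i] @ z
            val += rho * sum(np.sum((z - a) ** 2) for a in avg)
            return val

        def aug_grad(z, i=i, avg=avg):
            g = local_grad(z, i) + lam[i]
            g += 2 * rho * sum(z - a for a in avg)
            return g

        # Quasi-Newton local solve: L-BFGS keeps per-iteration cost O(dim).
        res = minimize(aug_lagrangian, x_prev[i], jac=aug_grad,
                       method="L-BFGS-B", options={"maxiter": 20})
        x[i] = res.x
    # Dual ascent on the pairwise consensus constraints.
    for i in range(n_agents):
        lam[i] = lam[i] + rho * sum(x[i] - x[j] for j in neighbors[i])

gap = np.mean([np.linalg.norm(x[i] - x[0]) for i in range(n_agents)])
print(f"after {rounds} rounds: mean consensus gap = {gap:.4f}")
```

In this template the only per-round communication is each agent sharing its current model with its neighbors, matching the one-broadcast-per-round property the abstract claims; the curvature information enters through the L-BFGS inner solve rather than an explicit Hessian, which is one way to keep the local cost linear in the dimension.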
