Faster Asynchronous Nonconvex Block Coordinate Descent with Locally Chosen Stepsizes (2203.11307v1)
Abstract: Distributed nonconvex optimization problems underlie many applications in learning and autonomy, and such problems commonly face asynchrony in agents' computations and communications. When delays in these operations are bounded, they are called partially asynchronous. In this paper, we present an uncoordinated stepsize selection rule for partially asynchronous block coordinate descent that requires only local information to implement, and that leads to faster convergence for a class of nonconvex problems than existing stepsize rules, which require some form of global information. The problems we consider satisfy the error bound condition, and the stepsize rule we present requires each agent to know only (i) a certain type of Lipschitz constant of its block of the gradient of the objective and (ii) the communication delays experienced between it and its neighbors. This formulation requires less information to be available to each agent than existing approaches, typically allows agents to use much larger stepsizes, and alleviates the impact of stragglers while still guaranteeing convergence to a stationary point. Simulation results provide comparisons and validate the faster convergence attained by the stepsize rule we develop.
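To make the setting concrete, below is a minimal Python sketch of partially asynchronous block coordinate descent in which each agent picks its stepsize from only its own block Lipschitz constant and its observed communication delay. The quadratic objective, the delay values, and the specific stepsize formula `gamma_i = 1 / (L_i * (1 + delay_i))` are illustrative assumptions for this sketch, not the rule derived in the paper.

```python
import numpy as np

# Illustrative quadratic objective f(x) = 0.5 * x^T A x - b^T x, split into blocks.
# The delays and the local stepsize formula below are placeholder assumptions.
rng = np.random.default_rng(0)
n_agents, block_dim = 4, 3
dim = n_agents * block_dim

M = rng.standard_normal((dim, dim))
A = M.T @ M + 0.1 * np.eye(dim)   # positive definite, so the example is well posed
b = rng.standard_normal(dim)

blocks = [slice(i * block_dim, (i + 1) * block_dim) for i in range(n_agents)]
L = [np.linalg.norm(A[blk, blk], 2) for blk in blocks]  # local block Lipschitz constants
delay = [0, 1, 2, 3]                                    # bounded per-agent communication delays

# Locally chosen stepsizes: each agent uses only its own L_i and delay_i
# (hypothetical rule for illustration; the paper's rule may differ).
gamma = [1.0 / (L_i * (1 + d_i)) for L_i, d_i in zip(L, delay)]

x = np.zeros(dim)
history = [x.copy()]  # past iterates, used to model outdated information at each agent

for k in range(500):
    x_new = x.copy()
    for i, blk in enumerate(blocks):
        # Agent i computes its block gradient from a stale copy of the full iterate,
        # delayed by delay[i] iterations (partial asynchrony: delays are bounded).
        stale = history[max(0, len(history) - 1 - delay[i])]
        grad_i = A[blk, :] @ stale - b[blk]
        x_new[blk] = x[blk] - gamma[i] * grad_i
    x = x_new
    history.append(x.copy())

print("final gradient norm:", np.linalg.norm(A @ x - b))
```

In this toy run the iterates approach the stationary point despite each agent reading outdated blocks, which mirrors the abstract's claim that convergence is retained under bounded delays while stepsizes are set from purely local quantities.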