One-Point Feedback for Composite Optimization with Applications to Distributed and Federated Learning (2107.05951v3)
Abstract: This work is devoted to solving the composite optimization problem with a mixture oracle: for the smooth part of the problem we have access to the gradient, while for the non-smooth part only a one-point zero-order oracle is available. For this setup we present a new method based on the sliding algorithm. Our method separates the oracle complexities of the two parts and computes the gradient of one of the functions as rarely as possible. We also show how the new method applies to distributed optimization and federated learning. Experimental results confirm the theory.
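The key ingredient on the zero-order side is a one-point gradient estimator: the non-smooth term is queried once per step at a randomly perturbed point, and that single (possibly noisy) value is turned into a biased gradient surrogate. Below is a minimal Python sketch of such an estimator; the spherical sampling, the smoothing parameter `tau`, and the oracle `f` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def one_point_gradient_estimate(f, x, tau=1e-2, rng=None):
    """One-point zero-order gradient estimate of f at x.

    Uses a single function evaluation at a randomly perturbed point:
        g = (d / tau) * f(x + tau * e) * e,
    where e is a uniformly random direction on the unit sphere.
    This is a generic sketch of a one-point estimator, not the
    paper's specific estimator.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)          # uniform direction on the unit sphere
    return (d / tau) * f(x + tau * e) * e

# Usage example with a hypothetical non-smooth term g(x) = ||x||_1:
if __name__ == "__main__":
    g = lambda x: np.abs(x).sum()
    x = np.ones(5)
    print(one_point_gradient_estimate(g, x))
```

In a sliding-type scheme, such an estimate would be used for the zero-order part in an inner loop, while the exact gradient of the smooth part is evaluated only in the (much rarer) outer iterations, which is what separates the two oracle complexities.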