Convergence Rate of a Penalty Method for Strongly Convex Problems with Linear Constraints (2004.13417v1)
Published 28 Apr 2020 in math.OC
Abstract: We consider an optimization problem with a strongly convex objective and linear inequality constraints. To handle a large number of constraints, we provide a penalty reformulation of the problem. As penalty functions we use a version of the one-sided Huber loss. The smoothness properties of these functions allow us to choose time-varying penalty parameters so that the incremental procedure with a diminishing step size converges to the exact solution at the rate $O(1/\sqrt{k})$. To the best of our knowledge, this is the first convergence-rate result for a penalty-based gradient method in which the penalty parameters vary with time.
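To make the scheme concrete, below is a minimal sketch of an incremental penalty gradient method in the spirit of the abstract. It assumes a one-sided Huber penalty of the standard smoothed-hinge form and illustrative parameter schedules (step size $\gamma_k = c/\sqrt{k}$, growing penalty weight $\lambda_k = \lambda_0 k^{1/4}$, fixed smoothing parameter $\delta$). These schedules, the helper names `huber_penalty_grad` and `incremental_penalty_method`, and the toy projection instance are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def huber_penalty_grad(v, delta):
    """Derivative of the one-sided Huber loss:
    phi(v) = 0                for v <= 0,
             v**2 / (2*delta) for 0 < v <= delta,
             v - delta/2      for v > delta."""
    if v <= 0.0:
        return 0.0
    if v <= delta:
        return v / delta
    return 1.0

def incremental_penalty_method(grad_f, A, b, x0, c=1.0, lam0=1.0,
                               delta=0.1, iters=20000, seed=0):
    """Incremental penalty gradient sketch: at step k, one randomly
    chosen constraint a_i^T x <= b_i is penalized through the
    one-sided Huber loss with a time-varying weight lam_k, and a
    diminishing step size gamma_k = c / sqrt(k) is used.
    (Schedules here are illustrative, not the paper's exact choices.)"""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    m = A.shape[0]
    for k in range(1, iters + 1):
        gamma = c / np.sqrt(k)      # diminishing step size
        lam = lam0 * k ** 0.25      # growing penalty weight (gamma*lam -> 0)
        i = rng.integers(m)         # incremental: one constraint per step
        v = A[i] @ x - b[i]         # signed constraint violation
        g = grad_f(x) + lam * m * huber_penalty_grad(v, delta) * A[i]
        x -= gamma * g
    return x

# Toy instance: min ||x - x_hat||^2 / 2  subject to  x <= (1, 1);
# the exact solution is the projection (1, 1).
x_hat = np.array([2.0, 2.0])
A = np.eye(2)
b = np.ones(2)
x = incremental_penalty_method(lambda x: x - x_hat, A, b, x0=np.zeros(2))
print(x)  # approaches (1, 1) as iters grows
```

The growing penalty weight pushes the penalized minimizer toward the feasible set while the diminishing step size damps the stochastic incremental noise; the product $\gamma_k \lambda_k \to 0$ in this sketch keeps the per-step penalty contribution vanishing, which is the usual requirement for convergence to the exact constrained solution.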