
Huber Loss-Based Penalty Approach to Problems with Linear Constraints (2311.00874v1)

Published 1 Nov 2023 in math.OC

Abstract: We consider a convex optimization problem with many linear inequality constraints. To deal with a large number of constraints, we provide a penalty reformulation of the problem, where the penalty is a variant of the one-sided Huber loss function with two penalty parameters. We study the infeasibility properties of the solutions of penalized problems for nonconvex and convex objective functions, as the penalty parameters vary with time. Then, we propose a random incremental penalty method for solving the original problem and investigate its convergence properties for convex and strongly convex objective functions. We show that the iterates of the method converge to a solution of the original problem almost surely and in expectation for suitable choices of the penalty parameters and the stepsize. We also establish the convergence rate of the method in terms of the expected function values by utilizing appropriately defined weighted averages of the iterates. We show an $O(\ln^{1/2+\epsilon} k/\sqrt{k})$ convergence rate when the objective function is convex and an $O(\ln^{\epsilon} k/k)$ convergence rate when the objective function is strongly convex, with $\epsilon>0$ being an arbitrarily small scalar. To the best of our knowledge, these are the first results on the convergence rate for a penalty-based incremental subgradient method with time-varying penalty parameters.
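The abstract does not spell out the penalty's exact form, but a standard one-sided Huber loss with two parameters can be written as follows; this is a plausible reconstruction consistent with the abstract, not the paper's verbatim definition. For a scalar violation $t$ and smoothing parameter $\delta > 0$,

$$H_\delta(t) = \begin{cases} 0, & t \le 0,\\ t^2/(2\delta), & 0 < t \le \delta,\\ t - \delta/2, & t > \delta, \end{cases}$$

so that, for linear constraints $a_i^\top x \le b_i$, $i = 1,\dots,m$, the penalized problem reads $\min_x f(x) + \gamma \sum_{i=1}^m H_\delta(a_i^\top x - b_i)$, with $\gamma$ and $\delta$ playing the role of the two (possibly time-varying) penalty parameters.

A minimal sketch of what one random incremental penalty step could look like, assuming uniform constraint sampling and user-supplied schedules alpha, gamma, delta (these names and choices are assumptions for illustration; the paper's exact update and parameter rules may differ):

import numpy as np

def huber_penalty_grad(t, delta):
    # Derivative of the one-sided Huber loss H_delta:
    # 0 for t <= 0, t/delta on (0, delta], and 1 for t > delta.
    if t <= 0.0:
        return 0.0
    return min(t / delta, 1.0)

def random_incremental_penalty(subgrad_f, A, b, x0, steps,
                               alpha, gamma, delta, seed=0):
    # Hypothetical sketch: at step k, sample one constraint
    # a_i^T x <= b_i and take a subgradient step on
    # f(x) + gamma(k) * H_{delta(k)}(a_i^T x - b_i).
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = len(b)
    for k in range(steps):
        i = rng.integers(m)                      # random constraint index
        t = A[i] @ x - b[i]                      # signed constraint violation
        g = subgrad_f(x) + gamma(k) * huber_penalty_grad(t, delta(k)) * A[i]
        x = x - alpha(k) * g                     # subgradient step
    return x

# Example (illustrative): minimize ||x - c||^2 subject to x <= 0.
c = np.array([1.0, -2.0])
x = random_incremental_penalty(
    subgrad_f=lambda x: 2 * (x - c),
    A=np.eye(2), b=np.zeros(2), x0=np.zeros(2), steps=20000,
    alpha=lambda k: 1.0 / (k + 10),    # diminishing stepsize
    gamma=lambda k: np.log(k + 2),     # slowly growing penalty parameter
    delta=lambda k: 1.0 / (k + 2),     # shrinking smoothing parameter
)

With a diminishing stepsize and a slowly growing penalty parameter, this sketch mirrors the interplay between the stepsize and the time-varying penalty parameters on which the abstract's convergence rates depend.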

Citations (1)
