
Accelerated nonlinear primal-dual hybrid gradient methods with applications to supervised machine learning (2109.12222v2)

Published 24 Sep 2021 in math.OC, cs.LG, math.ST, and stat.TH

Abstract: The linear primal-dual hybrid gradient (PDHG) method is a first-order method that splits convex optimization problems with saddle-point structure into smaller subproblems. Unlike those obtained in most splitting methods, these subproblems can generally be solved efficiently because they involve simple operations such as matrix-vector multiplications or proximal mappings that are fast to evaluate numerically. This advantage comes at the price that the linear PDHG method requires precise stepsize parameters for the problem at hand to achieve an optimal convergence rate. Unfortunately, these stepsize parameters are often prohibitively expensive to compute for large-scale optimization problems, such as those in machine learning. This issue makes the otherwise simple linear PDHG method unsuitable for such problems, and it is shared by most other first-order optimization methods. To address this issue, we introduce accelerated nonlinear PDHG methods that achieve an optimal convergence rate with stepsize parameters that are simple and efficient to compute. We prove rigorous convergence results, including results for strongly convex or smooth problems posed on infinite-dimensional reflexive Banach spaces. We illustrate the efficiency of our methods on $\ell_{1}$-constrained logistic regression and entropy-regularized matrix games. Our numerical experiments show that the nonlinear PDHG methods are considerably faster than competing methods.
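To make the saddle-point splitting concrete, here is a minimal sketch of the classical linear PDHG iteration (Chambolle-Pock), applied to a lasso problem written in saddle-point form. This is not the paper's accelerated nonlinear method; the problem, data, and stepsize choices below are illustrative assumptions only. Note how the stepsizes require the operator norm of the data matrix, which is exactly the quantity the abstract describes as expensive to compute at scale.

```python
# Sketch of linear PDHG for min_x 0.5*||Ax - b||^2 + lam*||x||_1, rewritten as
# the saddle-point problem min_x max_y <Ax, y> + lam*||x||_1 - (0.5*||y||^2 + <b, y>).
# All data and parameters here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 50, 100, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Linear PDHG needs tau * sigma * ||A||_2^2 <= 1; computing ||A||_2 is the
# kind of stepsize parameter that becomes costly for large-scale problems.
L = np.linalg.norm(A, 2)
tau = sigma = 1.0 / L
theta = 1.0  # extrapolation parameter

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(m)

for _ in range(500):
    # Dual step: proximal map of sigma * f*, with f*(y) = 0.5*||y||^2 + <b, y>.
    v = y + sigma * (A @ x_bar)
    y = (v - sigma * b) / (1.0 + sigma)
    # Primal step: proximal map of tau * lam * ||.||_1 is soft-thresholding.
    x_new = x - tau * (A.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
    # Extrapolation step.
    x_bar = x_new + theta * (x_new - x)
    x = x_new

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```

Each iteration only uses matrix-vector products with A and cheap proximal maps, which is the structural advantage of PDHG that the abstract highlights; the paper's contribution is obtaining optimal rates without needing precise stepsize parameters such as the operator norm above.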

Authors (2)
  1. Gabriel P. Langlois (8 papers)
  2. Jérôme Darbon (21 papers)
Citations (4)
