
Approximate message-passing for convex optimization with non-separable penalties (1809.06304v1)

Published 17 Sep 2018 in stat.ML, cs.IT, cs.LG, and math.IT

Abstract: We introduce an iterative optimization scheme for convex objectives consisting of a linear loss and a non-separable penalty, based on the expectation-consistent approximation and the vector approximate message-passing (VAMP) algorithm. Specifically, the penalties we address are convex on a linear transformation of the variable to be determined, a notable example being total variation (TV). We describe the connection between message-passing algorithms -- typically used for approximate inference -- and proximal methods for optimization, and show that our scheme is, like VAMP, similar in nature to Peaceman-Rachford splitting, with the important difference that stepsizes are set adaptively. Finally, we benchmark the performance of our VAMP-like iteration on problems where TV penalties are useful, namely classification in task fMRI and reconstruction in tomography, and show faster convergence than that of state-of-the-art approaches such as FISTA and ADMM in most settings.
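For orientation, the sketch below shows the classical fixed-stepsize Peaceman-Rachford splitting that the abstract references, applied to a *separable* l1-penalized least-squares problem where both proximal maps are cheap. This is the baseline the paper builds on, not the paper's VAMP scheme: the paper's contribution is handling non-separable penalties such as TV (where the prox of the penalty composed with a linear operator has no closed form) and setting the stepsizes adaptively. All function names and parameter values here (`peaceman_rachford_lasso`, `soft_threshold`, `gamma`, `lam`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def peaceman_rachford_lasso(A, y, lam, gamma=1.0, n_iter=200):
    """Fixed-stepsize Peaceman-Rachford splitting for
        min_x 0.5 * ||y - A x||^2 + lam * ||x||_1.
    Illustrative only: the paper's VAMP-like iteration adapts the
    stepsizes at every iteration and targets non-separable penalties."""
    n = A.shape[1]
    # Prox of the quadratic loss solves (I + gamma A^T A) x = v + gamma A^T y.
    M = np.eye(n) + gamma * (A.T @ A)
    Aty = A.T @ y
    z = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, z + gamma * Aty)   # prox of the loss
        r = 2.0 * x - z                           # first reflection
        u = soft_threshold(r, gamma * lam)        # prox of the penalty
        z = 2.0 * u - r                           # second reflection
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))            # tall A: strongly convex loss
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = peaceman_rachford_lasso(A, y, lam=0.1)
    print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
```

Note that unaveraged Peaceman-Rachford relies on strong convexity of one term for convergence (hence the tall, full-rank A above). Swapping the soft-thresholding step for the prox of a TV penalty, i.e. a penalty of the form g(Dx), is precisely where non-separability becomes an obstacle, which is the regime the paper's message-passing construction is designed for.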

Authors (5)
  1. Andre Manoel (21 papers)
  2. Florent Krzakala (179 papers)
  3. Gaël Varoquaux (87 papers)
  4. Bertrand Thirion (71 papers)
  5. Lenka Zdeborová (182 papers)
Citations (13)
