
Scalable Plug-and-Play ADMM with Convergence Guarantees (2006.03224v2)

Published 5 Jun 2020 in cs.LG, math.OC, and stat.ML

Abstract: Plug-and-play priors (PnP) is a broadly applicable methodology for solving inverse problems by exploiting statistical priors specified as denoisers. Recent work has reported the state-of-the-art performance of PnP algorithms using pre-trained deep neural nets as denoisers in a number of imaging applications. However, current PnP algorithms are impractical in large-scale settings due to their heavy computational and memory requirements. This work addresses this issue by proposing an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to large-scale datasets. We theoretically analyze the convergence of the algorithm under a set of explicit assumptions, extending recent theoretical results in the area. Additionally, we show the effectiveness of our algorithm with nonsmooth data-fidelity terms and deep neural net priors, its fast convergence compared to existing PnP algorithms, and its scalability in terms of speed and memory.
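The abstract's core idea, PnP-ADMM, alternates a data-fidelity proximal step with a plug-in denoiser in place of a prior's proximal operator. The following is a minimal illustrative sketch of a standard (batch, non-incremental) PnP-ADMM loop for a least-squares fidelity term; the `soft` thresholding denoiser is a hypothetical stand-in for the pre-trained deep denoisers the paper uses, and `pnp_admm`, `gamma`, and the variable names are illustrative choices, not the authors' implementation.

```python
import numpy as np

def pnp_admm(y, A, denoise, gamma=1.0, num_iters=50):
    """Illustrative PnP-ADMM sketch (not the paper's incremental variant).

    Approximately solves min_x 0.5*||A x - y||^2 + prior, where the prior
    is imposed implicitly through the plug-in denoiser `denoise`.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Proximal step for the quadratic fidelity g(x) = 0.5*||A x - y||^2:
    # prox_{gamma*g}(v) solves (gamma*A^T A + I) x = gamma*A^T y + v.
    M = gamma * (A.T @ A) + np.eye(n)
    Aty = A.T @ y
    for _ in range(num_iters):
        x = np.linalg.solve(M, gamma * Aty + (z - u))  # data-fidelity prox
        z = denoise(x + u)                             # plug-in denoiser step
        u = u + x - z                                  # dual (residual) update
    return x

# Hypothetical toy denoiser: soft-thresholding, standing in for a deep prior.
def soft(v, t=0.05):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

With a sparse ground truth and a random sensing matrix, the loop recovers the signal up to the small shrinkage bias introduced by the soft-threshold denoiser; swapping `denoise` for a learned network gives the PnP behavior the abstract describes.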

Authors (5)
  1. Yu Sun (226 papers)
  2. Zihui Wu (19 papers)
  3. Xiaojian Xu (19 papers)
  4. Brendt Wohlberg (48 papers)
  5. Ulugbek S. Kamilov (91 papers)
Citations (67)
