Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence Analysis (2310.00133v1)

Published 29 Sep 2023 in cs.CV

Abstract: Plug-and-Play (PnP) priors are a widely used family of methods for solving imaging inverse problems by integrating physical measurement models with image priors specified through image denoisers. PnP methods achieve state-of-the-art performance when the prior is obtained using powerful deep denoisers. Despite extensive work on PnP, the distribution mismatch between training and testing data has often been overlooked in the PnP literature. This paper presents new theoretical and numerical results on prior distribution mismatch and domain adaptation for the alternating direction method of multipliers (ADMM) variant of PnP. Our theoretical result provides an explicit error bound for PnP-ADMM due to the mismatch between the desired denoiser and the one used for inference. Our analysis extends prior work in the area by considering the mismatch under nonconvex data-fidelity terms and expansive denoisers. Our first set of numerical results quantifies the impact of prior distribution mismatch on the performance of PnP-ADMM for image super-resolution. Our second set of numerical results considers a simple and effective domain adaptation strategy that closes the performance gap caused by mismatched denoisers. Our results suggest that PnP-ADMM is relatively robust to prior distribution mismatch, while also showing that the performance gap can be significantly reduced with only a few training samples from the desired distribution.
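The PnP-ADMM scheme the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a least-squares data-fidelity term and uses a hypothetical moving-average "denoiser" as a stand-in for the learned deep denoiser that the paper studies.

```python
import numpy as np

def pnp_admm(A, y, denoiser, gamma=1.0, iters=100):
    """Minimal PnP-ADMM sketch (illustrative, not the paper's code).

    x-step: proximal map of the data-fidelity term 0.5 * ||Ax - y||^2
    z-step: plug-in denoiser D, replacing the prior's proximal operator
    u-step: scaled dual-variable (running residual) update
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Precompute the normal equations for the least-squares proximal step:
    #   x = argmin 0.5*||Ax - y||^2 + (1/(2*gamma))*||x - v||^2
    M = gamma * A.T @ A + np.eye(n)
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(M, gamma * Aty + (z - u))  # data-fidelity step
        z = denoiser(x + u)                            # prior step via denoiser
        u = u + x - z                                  # dual update
    return x

# Hypothetical smoothing "denoiser" (moving average); in the paper's setting
# a deep denoiser trained on the target distribution would be used instead.
def box_denoiser(v, k=3):
    kernel = np.ones(k) / k
    return np.convolve(v, kernel, mode="same")

rng = np.random.default_rng(0)
n, m = 64, 48
x_true = np.cumsum(rng.standard_normal(n)) / 10   # smooth ground-truth signal
A = rng.standard_normal((m, n)) / np.sqrt(m)      # underdetermined forward model
y = A @ x_true + 0.01 * rng.standard_normal(m)    # noisy measurements
x_hat = pnp_admm(A, y, box_denoiser, gamma=1.0, iters=100)
```

A prior-mismatch experiment in this sketch would amount to swapping `box_denoiser` for one fitted to a different signal class and comparing reconstruction error, which mirrors the comparison the paper performs with deep denoisers.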
