Averaged Deep Denoisers for Image Regularization (2207.07321v3)

Published 15 Jul 2022 in eess.IV

Abstract: Plug-and-Play (PnP) and Regularization-by-Denoising (RED) are recent paradigms for image reconstruction that leverage the power of modern denoisers for image regularization. In particular, they have been shown to deliver state-of-the-art reconstructions with CNN denoisers. Since the regularization is performed in an ad hoc manner, understanding the convergence of PnP and RED has been an active research area. Recent works have shown that iterate convergence can be guaranteed if the denoiser is averaged or nonexpansive. However, integrating nonexpansivity with gradient-based learning is challenging; the core issue is that testing nonexpansivity is intractable. Using numerical examples, we show that existing CNN denoisers tend to violate the nonexpansive property, which can cause PnP or RED to diverge. In fact, algorithms for training nonexpansive denoisers either cannot guarantee nonexpansivity or are computationally intensive. In this work, we construct contractive and averaged image denoisers by unfolding splitting-based optimization algorithms applied to wavelet denoising, and demonstrate that their regularization capacity for PnP and RED matches that of CNN denoisers. To our knowledge, this is the first work to propose a simple framework for training contractive denoisers using network unfolding.
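The abstract notes that testing nonexpansivity of a learned denoiser is intractable. A cheap necessary check, however, is to lower-bound the Lipschitz constant by sampling pairs of nearby inputs: any observed ratio above 1 certifies that the map is not nonexpansive. The sketch below is illustrative only (it is not the paper's method), using soft-thresholding, which is provably 1-Lipschitz as the proximal map of the l1 norm, as a stand-in denoiser:

```python
import numpy as np

def lipschitz_lower_bound(denoiser, shape, n_pairs=200, eps=1e-3, seed=0):
    """Lower-bound the Lipschitz constant of `denoiser` by sampling
    nearby input pairs. Any ratio > 1 certifies non-nonexpansivity;
    a ratio <= 1 proves nothing (the test is necessary, not sufficient)."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        x = rng.standard_normal(shape)
        y = x + eps * rng.standard_normal(shape)
        ratio = np.linalg.norm(denoiser(x) - denoiser(y)) / np.linalg.norm(x - y)
        best = max(best, ratio)
    return best  # the true Lipschitz constant is >= this value

# Soft-thresholding (prox of the l1 norm) is firmly nonexpansive,
# so its sampled ratios never exceed 1.
soft = lambda x, t=0.1: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
print(lipschitz_lower_bound(soft, (64,)))  # <= 1.0
```

For a CNN denoiser, the same probe applied around natural-image inputs is how the divergence-inducing violations mentioned in the abstract would show up: ratios strictly above 1.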

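The paper builds its denoisers by unfolding splitting-based algorithms for wavelet denoising. As a simplified stand-in, the sketch below plugs a firmly nonexpansive denoiser (again soft-thresholding, the prox of the l1 norm) into a plug-and-play forward-backward iteration on a toy sparse-recovery problem. With an averaged denoiser and a step size below 1/L, the iterates are guaranteed to converge. The problem sizes and parameters here are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: recover a sparse x0 from b = A @ x0 + noise.
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, 5, replace=False)] = 1.0
b = A @ x0 + 0.01 * rng.standard_normal(m)

# Firmly nonexpansive "denoiser": soft-thresholding.
denoise = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# PnP forward-backward iteration: a gradient step on the data-fidelity
# term, followed by the denoiser in place of a proximal operator.
gamma = 0.9 / np.linalg.norm(A, 2) ** 2  # step size below 1/L
x = np.zeros(n)
for _ in range(500):
    x = denoise(x - gamma * A.T @ (A @ x - b), gamma * 0.05)

print(np.linalg.norm(x - x0))  # small reconstruction error
```

Replacing `denoise` with a map that is not nonexpansive voids the convergence guarantee, which is exactly the failure mode the paper documents for off-the-shelf CNN denoisers.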
References (58)
  1. IEEE Signal Process. Mag. 25(4), 84–99 (2008) (2) H.W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems (Kluwer Academic Publishers, Dordrecht, Netherlands, 1996) (3) W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011) (4) G. Jagatap, C. Hegde, Algorithmic guarantees for inverse imaging with untrained network priors. Proc. Adv. Neural Inf. Process. Syst. pp. 14,832–14,842 (2019) (5) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. 
(Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. 
arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H.W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems (Kluwer Academic Publishers, Dordrecht, Netherlands, 1996) (3) W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011) (4) G. Jagatap, C. Hegde, Algorithmic guarantees for inverse imaging with untrained network priors. Proc. Adv. Neural Inf. Process. Syst. pp. 14,832–14,842 (2019) (5) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. 
Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. 
Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 
3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. 
IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. 
Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011) (4) G. Jagatap, C. Hegde, Algorithmic guarantees for inverse imaging with untrained network priors. Proc. Adv. Neural Inf. Process. Syst. pp. 14,832–14,842 (2019) (5) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. 
Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. 
Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Jagatap, C. Hegde, Algorithmic guarantees for inverse imaging with untrained network priors. Proc. Adv. Neural Inf. Process. Syst. pp. 14,832–14,842 (2019) (5) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. 
Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. 
Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. 
Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E.
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T.
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. 
Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. 
LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. 
Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. 
Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. 
Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. 
Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 
10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. 
Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 
(Springer, New York, NY, USA, 2017)
(14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. 
Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. 
Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 
3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. 
IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. 
Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. 
Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. 
Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. 
Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 
399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. 
Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. 
Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. 
IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
(Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. 
IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. 
Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. 
Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. 
Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. 
Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. 
Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. 
Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 
28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. 
arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. 
Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. 
Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. 
Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. 
IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. 
Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 
14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. 
Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. 
De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009)
  3. IEEE Trans. Image Process. 20(7), 1838–1857 (2011) (4) G. Jagatap, C. Hegde, Algorithmic guarantees for inverse imaging with untrained network priors. Proc. Adv. Neural Inf. Process. Syst. pp. 14,832–14,842 (2019) (5) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. 
Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Jagatap, C. Hegde, Algorithmic guarantees for inverse imaging with untrained network priors. Proc. Adv. Neural Inf. Process. Syst. pp. 14,832–14,842 (2019) (5) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. 
Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. 
Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. 
LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 
2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. 
Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 
16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. 
Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. 
Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. 
Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H.H. Bauschke, P.L. 
(Springer, New York, NY, USA, 2017)
(14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. 
Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. 
Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. 
Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. 
Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 
28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. 
arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. 
Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones [Some properties of angle-bounded and n-cyclically monotone operators]. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. 
Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. 
Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. 
Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 
55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. 
Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S.H. Chan, X. Wang, O.A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017) (6) A. Rond, R. Giryes, M. Elad, Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016) (7) J.M. Bioucas-Dias, M. Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. 
(2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. 
Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 
14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. 
Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. 
Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. 
Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. 
Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. 
Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 
14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. 
Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. 
Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones [Some properties of angle-bounded and n-cyclically monotone operators]. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. 
Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. 
Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. 
Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. 
Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. 
(Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. 
arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. 
Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. 
Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. 
IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. 
Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. 
Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. 
Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. 
Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. 
Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. 
Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 
28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. 
arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. 
Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. 
Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. 
Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. 
Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. 
Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. 
Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009)
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.M. Bioucas-Dias, M. Figueiredo, Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010) (8) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. 
Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. 
Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. 
IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. 
Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena 60(1-4), 259–268 (1992) (9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. 
Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. 
Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1subscript𝑙1l_{1}italic_l start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008) (10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011) (11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017) (12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. 
Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
(Springer, New York, NY, USA, 2017)
(14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, Cham, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones [Some properties of angle-bounded and n-cyclically monotone operators]. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. 
Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 
14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. 
arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. 
Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. 
Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. 
arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. 
IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. 
Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. 
Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. 
Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. 
Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. 
Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. 
Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. 
Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. 
(2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. 
Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. 
Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. 
(2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. 
Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 
14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. 
Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. 
arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. 
Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. 
Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. 
Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. 
Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. 
Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. 
Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(Springer, New York, NY, USA, 2017)
(14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones [Some properties of angle-bounded and n-cyclically monotone operators]. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. 
Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. 
Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. 
Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 
14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. 
Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. 
arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. 
Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. 
Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 
31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. 
Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. 
Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(9) E.J. Candes, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)
(10) W. Dong, X. Li, L. Zhang, G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 457–464 (2011)
(11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017)
(12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)
(13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017)
(14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 
14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. 
Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. 
Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 
399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. 
Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. 
Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. 
IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. 
Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. 
Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. 
Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. 
Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. 
Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. 
Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 
399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. 
(11) L. Zhang, W. Zuo, Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017)
(12) G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)
(13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017)
(14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. 
Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. 
(2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. 
Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 
14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. 
Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. 
Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. 
Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. 
Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. 
Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 
14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. 
Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. 
Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. 
Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  12. G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979) (13) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. 
Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. 
Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. (Springer, New York, NY, USA, 2017) (14) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. 
Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, S. Boyd, Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016) (15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) (16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016) (17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. 
Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. 
Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. 
Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. 
Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 
55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. 
Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 
31 (2018)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. 
Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. 
Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. 
Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007) (18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. 
Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. 
Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. 
Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. 
Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. 
LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. 
Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 
399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(14) Appl. Comput. Math. 15(1), 3–43 (2016)
(15) A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
(16) S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
(17) K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
(18) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones [Some properties of angle-bounded and n-cyclically monotone operators]. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. 
Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. 
LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. 
Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 
14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. 
(2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 
14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. 
Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. 
Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
15. A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
16. S. Sreehari, S.V. Venkatakrishnan, B. Wohlberg, G.T. Buzzard, L.F. Drummy, J.P. Simmons, C.A. Bouman, Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
17. K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
18. K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
19. E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
20. Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
21. K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
22. T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
23. K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
24. S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
25. S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
26. Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
27. E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
28. Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
29. Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
30. R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
31. C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
32. Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
33. Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
34. G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
35. Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
36. K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
37. J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
38. J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018)
39. A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
40. C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
41. P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
42. R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
43. J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
44. A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
45. R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
46. J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
47. R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
48. A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
49. H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
50. J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
51. M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
52. N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
53. J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
54. V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
55. J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
56. A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
57. D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
58. A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. 
Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. 
Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. 
Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. 
Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. 
Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. 
Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 
55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. 
Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 
31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. 
IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. 
Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. 
arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. 
Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. 
IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. 
Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. 
Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. 
Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. 
Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. 
Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. 
Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017) (19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. 
IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019) (20) Y. Sun, B. Wohlberg, U.S. 
Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 
14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019) (21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017) (22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019) (23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. 
Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. 
Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. 
Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pattern Recognit. pp. 1964–1971 (2009)
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. 
Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. 
Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 
55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. 
Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 
31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. 
IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. 
Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. 
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. 
Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. 
Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
(18) IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
(19) E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
(20) Y. Sun, B. Wohlberg, U.S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
(21) K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021) (24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. 
arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. 
Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. 
Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 
31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. 
Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. 
Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. 
Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 
10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. 
LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. 
Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. 
Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 
31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. 
Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
20. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
21. K. Zhang, W. Zuo, S. Gu, L. Zhang, Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
22. T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
23. K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
24. S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
25. S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
26. Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
27. E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
28. Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
29. Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
30. R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
31. C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
32. Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
33. Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
34. G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
35. Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
36. K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
37. J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
38. J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
39. A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
40. C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
41. P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
42. R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
43. J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
44. A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
45. R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
46. J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
47. R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
48. A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
49. H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
50. J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
51. M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
52. N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
53. J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
54. V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
55. J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
56. A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
57. D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
58. A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. 
Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. 
Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. 
De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009)
21. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 3929–3938 (2017)
(22) T. Tirer, R. Giryes, Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
(23) K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. 
Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. 
IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. 
Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 
14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. 
Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. 
Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. 
Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. 
Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. 
Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. 
Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. 
Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. 
Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. 
Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. 
Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 
55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
22. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
23. K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
24. S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
25. S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
26. Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
27. E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
28. Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
29. Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
30. R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
31. C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
32. Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
33. Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
34. G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
35. Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
36. K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
37. J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
38. J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
39. A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
40. C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
41. P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
42. R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
43. J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
44. A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
45. R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
46. J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
47. R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
48. A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
49. H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
50. J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
51. M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
52. N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
53. J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
54. V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
55. J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
56. A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
57. D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
58. A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. 
Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 
30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. 
Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. 
Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 
1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. 
Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. 
Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. 
Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
(23) IEEE Trans. Pattern Anal. Mach. Intell. (2021)
(24) S. Hurault, A. Leclaire, N. Papadakis, Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
(25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
(26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
(27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
(28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 
31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. 
Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 
29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. 
Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. 
Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. 
Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. 
De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009)
  24. Proc. Int. Conf. Learn. Represent. (2022) (25) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) S. Hurault, A. Leclaire, N. Papadakis, Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022) (26) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. 
Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Romano, M. Elad, P. Milanfar, The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017) (27) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. 
Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. 
Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. 
Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. 
Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. 
Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. 
Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. 
Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) E.T. Reehorst, P. Schniter, Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018) (28) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. 
Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) Y. Sun, J. Liu, U.S. Kamilov, Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019) (29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. 
Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021) (30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. 
LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021) (31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. 
Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. 
Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. 
Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. 
IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. 
Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. 
Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 
55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. 
Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prdeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. 
Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. 
Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. 
Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 
399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
28. Proc. Adv. Neural Inf. Process. Syst. pp. 380–390 (2019)
(29) Y. Sun, Z. Wu, X. Xu, B. Wohlberg, U.S. Kamilov, Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 
957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. 
Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. 
(29) IEEE Trans. Comput. Imaging 7, 849–863 (2021)
(30) R. Cohen, M. Elad, P. Milanfar, Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
(31) C. Metzler, P. Schniter, A. Veeraraghavan, et al., prDeep: Robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018)
(32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 
399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. 
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. 
Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. 
Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. 
Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. 
Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. 
Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. 
Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. 
Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. 
Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. 
Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. 
De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009)
  31. Proc. Intl. Conf. Mach. Learn. pp. 3501–3510 (2018) (32) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, U.S. Kamilov, SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020) (33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. 
Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. 
Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(32) IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
(33) Z. Wu, Y. Sun, J. Liu, U. Kamilov, Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
(34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones [Some properties of angle-bounded and n-cyclically monotone operators]. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. 
Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. 
Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. 
Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. 
Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. 
IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  33. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019) (34) G. Mataev, P. Milanfar, M. Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. 
Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) G. Mataev, P. Milanfar, M. 
Elad, DeepRED: Deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019) (35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. 
De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022) (36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. 
Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 
1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(34) Proc. IEEE Intl. Conf. Comp. Vis. Wksh. (2019)
(35) Y. Hu, J. Liu, X. Xu, U.S. Kamilov, Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010) (37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. 
Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. 
Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016) (38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. 
Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018) (39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. 
Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. 
Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. 
Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. 
IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
35. arXiv:2202.04961 (2022)
(36) K. Gregor, Y. LeCun, Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 
1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 
631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. 
Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. 
IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(36) Proc. Intl. Conf. Mach. Learn. pp. 399–406 (2010)
(37) J. Sun, H. Li, Z. Xu, et al., Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
(38) J. Zhang, B. Ghanem, ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022) (40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. 
Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Processing Letters 30, 1447–1451 (2023) (41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. 
Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. 
Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 
8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. 
Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. 
Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. 
Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. 
Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. 
Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. 
arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(38) Proc. IEEE Intl. Conf. Comp. Vis. pp. 1828–1837 (2018)
(39) A. Repetti, M. Terris, Y. Wiaux, J.C. Pesquet, Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
(40) C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
(41) P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
(42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
(43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
(45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
(46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
(47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
(48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
(49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
(50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
(51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
(52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
(53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
(54) V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
(55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
(56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
(57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
(58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. 
De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009)
  39. Proc. Eur. Signal Process. Conf. pp. 957–961 (2022)
  40. C.D. Athalye, K.N. Chaudhury, B. Kumar, On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
  41. P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
  42. R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
  43. J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
  44. A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
  45. R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
  46. J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
  47. R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
  48. A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
  49. H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
  50. J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
  51. M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
  52. N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
  53. J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
  54. V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
  55. J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J. Math. 26, 137–150 (1977)
  56. A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
  57. D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
  58. A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. 
Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. 
IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  40. IEEE Signal Processing Letters 30, 1447–1451 (2023)
  41. P. Nair, R.G. Gavaskar, K.N. Chaudhury, Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
  42. R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
  43. J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
  44. A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019)
  45. R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
  46. J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
  47. R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023)
  48. A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
  49. H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
  50. J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
  51. M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
  52. N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
  53. J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
  54. V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
  55. J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977)
  56. A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
  57. D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
  58. A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  41. IEEE Trans. Comput. Imaging 7, 337–348 (2021) (42) R.G. Gavaskar, C.D. Athalye, K.N. Chaudhury, On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021) (43) J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (44) A. Raj, Y. Li, Y. Bresler, GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
  42. IEEE Trans. Image Process. 30, 4802–4813 (2021)
  43. J. Liu, S. Asif, B. Wohlberg, U. Kamilov, Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. 
Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. 
IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. 
Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  44. Proc. IEEE Intl. Conf. Comp. Vis. pp. 5602–5611 (2019) (45) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. 
Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Cohen, Y. Blau, D. Freedman, E. Rivlin, It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34 (2021) (46) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021) (47) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. 
Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. Journal of Mathematical Imaging and Vision 65(1), 140–163 (2023) (48) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 
7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018) (49) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. 
(2019) (50) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021) (51) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. 
Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020) (52) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) N. Parikh, S. Boyd, Proximal algorithms. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. 
Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. 
De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. 
Pattern Recognit. pp. 1964–1971 (2009)
  45. Proc. Adv. Neural Inf. Process. Syst. 34 (2021)
  46. J.C. Pesquet, A. Repetti, M. Terris, Y. Wiaux, Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
  47. R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra, On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vis. 65(1), 140–163 (2023)
  48. A. Virmaux, K. Scaman, Lipschitz regularity of deep neural networks: Analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
  49. H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
  50. J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
  51. M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
  52. N. Parikh, S. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
  53. J. Eckstein, D.P. Bertsekas, On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
  54. V. Pata, Fixed Point Theorems and Applications (Springer, 2019)
  55. J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones [Some properties of angle-bounded and n-cyclically monotone operators]. Israel J. Math. 26, 137–150 (1977)
  56. A. Chambolle, R.A. DeVore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
  57. D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014)
  58. A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  48. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
  49. H. Sedghi, V. Gupta, P.M. Long, The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
  50. J. Hertrich, S. Neumayer, G. Steidl, Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
  51. M. Terris, A. Repetti, J.C. Pesquet, Y. Wiaux, Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. pp. 8658–8662 (2020)
  52. Found. Trends. Optim. 1(3), 127–239 (2014) (53) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J. Eckstein, D.P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. 
Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  53. Math. Program. 55(1), 293–318 (1992) (54) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) V. Pata, Fixed Point Theorem and Applications (Springer, 2019) (55) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. 
Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  54. J.B. Baillon, G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  55. Israel Journal of Mathematics 26, 137–150 (1977) (56) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Chambolle, R.A. De Vore, N.Y. Lee, B.J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  56. IEEE Trans. Image Process. 7(3), 319–335 (1998) (57) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) D.P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  57. arXiv:1412.6980 (2014) (58) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009) A. Levin, Y. Weiss, F. Durand, W.T. Freeman, Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)
  58. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. pp. 1964–1971 (2009)