Bilevel learning of regularization models and their discretization for image deblurring and super-resolution (2302.10056v2)

Published 20 Feb 2023 in math.NA, cs.NA, and math.OC

Abstract: Bilevel learning is a powerful optimization technique that has been extensively employed in recent years to bridge the world of model-driven variational approaches with data-driven methods. Upon suitable parametrization of the desired quantities of interest (e.g., regularization terms or discretization filters), such an approach computes optimal parameter values by solving a nested optimization problem where the variational model acts as a constraint. In this work, we consider two different use cases of bilevel learning for the problem of image restoration. First, we focus on learning scalar weights and convolutional filters defining a Field of Experts regularizer to restore natural images degraded by blur and noise. To improve the practical performance, the lower-level problem is solved by means of a gradient descent scheme combined with a line-search strategy based on the Barzilai-Borwein rule. As a second application, the bilevel setup is employed for learning a discretization of the popular total variation regularizer for solving image restoration problems (in particular, deblurring and super-resolution). Numerical results show the effectiveness of the approach and its generalization to multiple tasks.
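
For illustration only, the following is a minimal Python sketch of a lower-level solver in the spirit of the one described in the abstract: gradient descent with Barzilai-Borwein (BB1) step sizes. It is not the authors' implementation; the function names, the initial step size, the curvature safeguard, and the stopping rule are all assumptions, and the line-search strategy mentioned in the abstract is omitted for brevity. The gradient oracle `grad` stands in for the gradient of the parametrized variational energy (data fidelity plus regularizer).

```python
import numpy as np

def bb_gradient_descent(grad, x0, max_iter=200, tol=1e-6, alpha0=1e-2):
    """Gradient descent with Barzilai-Borwein (BB1) step sizes.

    Generic sketch: `grad(x)` is assumed to return the gradient of a
    smooth variational energy at x; all defaults are illustrative.
    """
    x_prev = np.asarray(x0, dtype=float).copy()
    g_prev = grad(x_prev)
    alpha = alpha0
    x = x_prev - alpha * g_prev          # first plain gradient step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # stationarity-based stopping rule
            break
        s = x - x_prev                   # iterate difference
        y = g - g_prev                   # gradient difference
        sy = float(np.vdot(s, y))
        if sy > 1e-12:                   # BB1 step <s, s> / <s, y>, only when
            alpha = float(np.vdot(s, s)) / sy  # curvature is positive (safeguard)
        x_prev, g_prev = x, g
        x = x - alpha * g                # descent step with BB step size
    return x
```

In the paper's first use case, `grad` would correspond to the gradient of the Field-of-Experts-regularized deblurring energy at the current parameter values; here it is left abstract so the sketch stays self-contained.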

Authors (9)
  1. Tatiana A. Bubba (18 papers)
  2. Luca Calatroni (51 papers)
  3. Ambra Catozzi (1 paper)
  4. Serena Crisci (3 papers)
  5. Thomas Pock (72 papers)
  6. Monica Pragliola (18 papers)
  7. Siiri Rautio (5 papers)
  8. Danilo Riccio (4 papers)
  9. Andrea Sebastiani (9 papers)
Citations (1)
