Regularization-by-Denoising (RED)
- Regularization-by-Denoising (RED) is a framework that integrates a denoising operator into an explicit, image-adaptive Laplacian regularizer for solving inverse imaging problems.
- The mathematical formulation combines a data fidelity term with a clear, closed-form gradient of the RED regularizer, ensuring well-defined optimization.
- RED supports diverse optimization strategies—gradient descent, ADMM, and fixed-point iteration—resulting in robust image restoration performance and theoretical convergence guarantees.
Regularization-by-Denoising (RED) is a framework for solving inverse problems by explicitly constructing a regularization functional from an image denoising operator. Unlike earlier plug-and-play approaches that use denoisers as implicit priors within an iterative optimization process, RED forms an explicit, image-adaptive Laplacian regularizer driven by the denoiser, which enables well-defined optimization and convergence guarantees under suitable conditions.
1. Conceptual Foundations and Distinction from Existing Plug-and-Play Priors
The RED framework fundamentally inverts the conventional use of denoisers in image inverse problems. While methods such as the Plug-and-Play Prior (P³) inject denoising operators into alternating direction methods (notably ADMM) as implicit priors, yielding a chained-denoising interpretation but without an explicit regularization function, RED directly defines a regularization functional by embedding the denoising engine in the energy functional. This construction results in an explicit, image-adaptive Laplacian regularizer

$$\rho(x) = \tfrac{1}{2}\, x^{T}\bigl(x - f(x)\bigr),$$

where $x$ is the image and $f(x)$ is the denoised output.
The primary distinctions from P³ are:
- P³ lacks an explicit optimization objective and relies on implicit regularization via variable splitting and denoising steps, making parameter tuning intricate and theoretical convergence less direct.
- RED provides a concrete cost function and closed-form gradient, facilitating flexible iteration and optimization beyond ADMM, and making possible a variety of gradient-based minimization methods (Romano et al., 2016).
2. Mathematical Formulation and Properties
The overall RED objective function for image recovery from observations $y$ is

$$E(x) = \ell(y, x) + \lambda\,\rho(x) = \ell(y, x) + \frac{\lambda}{2}\, x^{T}\bigl(x - f(x)\bigr),$$

where $\ell(y, x)$ encodes data fidelity (e.g., an $\ell_2$ norm for Gaussian noise in the forward model $y = Hx + e$), and $\lambda > 0$ weights the regularization.
The explicit gradient of this objective, under local homogeneity of $f$ (i.e., $f\bigl((1+\epsilon)x\bigr) = (1+\epsilon)f(x)$ for small $\epsilon$, which implies $\nabla f(x)\,x = f(x)$) and strong passivity (the Jacobian $\nabla f(x)$ has spectral radius at most 1), is

$$\nabla E(x) = \nabla \ell(y, x) + \lambda\bigl(x - f(x)\bigr).$$

This gradient depends only on the denoising residual $x - f(x)$ and requires merely one denoising invocation per evaluation. The strong passivity condition ensures the Hessian of the regularization term is positive semidefinite, leading to convexity of the regularizer and, with a convex $\ell$, of the entire cost.
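These properties can be checked numerically. The sketch below is an illustration rather than code from the paper: a simple symmetric linear smoothing filter stands in for the denoiser (the names `denoise`, `red_regularizer`, and `red_gradient` are assumptions of this example). Linear filters are exactly homogeneous, and this averaging kernel has a symmetric Jacobian with spectral radius at most 1, so the gradient identity $\nabla\rho(x) = x - f(x)$ can be verified against finite differences:

```python
import numpy as np

def denoise(x):
    # Stand-in denoiser: symmetric linear smoothing with periodic boundary.
    # Linear filters are homogeneous (f(c*x) = c*f(x)), and this averaging
    # kernel has a symmetric Jacobian with spectral radius <= 1 (passivity).
    return (x + np.roll(x, 1) + np.roll(x, -1)) / 3.0

def red_regularizer(x):
    # rho(x) = (1/2) x^T (x - f(x))
    return 0.5 * x @ (x - denoise(x))

def red_gradient(x):
    # Under homogeneity + Jacobian symmetry: grad rho(x) = x - f(x)
    return x - denoise(x)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# Local homogeneity: f((1+eps) x) == (1+eps) f(x) for a linear filter.
eps = 1e-3
homog_ok = np.allclose(denoise((1 + eps) * x), (1 + eps) * denoise(x))

# Gradient identity vs. central finite differences of rho.
h = 1e-6
fd_grad = np.array([
    (red_regularizer(x + h * e) - red_regularizer(x - h * e)) / (2 * h)
    for e in np.eye(x.size)
])
```

For denoisers that are not linear, the homogeneity check above holds only approximately, which is exactly the "local homogeneity" assumption of the framework.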
3. Optimization Strategies
RED's explicit regularization and gradient admit various optimization schemes:
- Steepest Descent / Gradient Descent: The RED update is simply
$$x_{k+1} = x_k - \mu\bigl[\nabla \ell(y, x_k) + \lambda\bigl(x_k - f(x_k)\bigr)\bigr],$$
with step size $\mu$ set by line search or preset for convex objectives. Variants such as conjugate gradient and SESOP are also deployable.
- ADMM (Alternating Direction Method of Multipliers): An auxiliary variable decouples and , resulting in updates in which the "v-update" solves a minimization involving the explicit RED term, typically done approximately to maintain efficiency and preserve the explicit objective.
- Fixed-Point Iteration: The first-order stationarity condition,
$$\nabla \ell(y, x^{*}) + \lambda\bigl(x^{*} - f(x^{*})\bigr) = 0,$$
can be solved directly. For a quadratic fidelity $\ell(y,x) = \frac{1}{2\sigma^{2}}\|Hx - y\|_2^2$ this yields the update $x_{k+1} = \bigl(H^{T}H/\sigma^{2} + \lambda I\bigr)^{-1}\bigl(H^{T}y/\sigma^{2} + \lambda f(x_k)\bigr)$, a closed-form blending of Wiener filtering and denoising.
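The steepest-descent variant above can be sketched in a few lines for the pure denoising case ($H = I$). This is a minimal illustration, not the paper's code: a linear smoothing filter stands in for the denoiser, and the parameter values and step-size rule are assumptions of the example.

```python
import numpy as np

def denoise(x):
    # Hypothetical stand-in denoiser: symmetric linear smoothing (periodic).
    return (x + np.roll(x, 1) + np.roll(x, -1)) / 3.0

def red_objective(x, y, sigma, lam):
    # E(x) = ||x - y||^2 / (2 sigma^2) + (lam/2) x^T (x - f(x))
    return 0.5 * np.sum((x - y) ** 2) / sigma**2 + 0.5 * lam * x @ (x - denoise(x))

def red_gradient_descent(y, sigma, lam, mu, iters=200):
    # x_{k+1} = x_k - mu * [ (x_k - y)/sigma^2 + lam * (x_k - f(x_k)) ]
    x = y.copy()
    for _ in range(iters):
        grad = (x - y) / sigma**2 + lam * (x - denoise(x))
        x = x - mu * grad
    return x

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 128))
sigma = 0.1
y = clean + sigma * rng.standard_normal(128)

lam = 20.0
mu = 1.0 / (1.0 / sigma**2 + lam)  # conservative preset step for this convex objective
x_hat = red_gradient_descent(y, sigma, lam, mu)
```

Note that the gradient here is exactly the residual form $\nabla\ell + \lambda(x - f(x))$; no differentiation of the denoiser itself is required.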
All these techniques benefit from the well-defined, explicit gradient structure of RED, freeing the solver from reliance on ADMM splitting and variable-specific denoising steps.
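For the same denoising setup ($H = I$, quadratic $\ell$), the fixed-point scheme reduces to a compact averaging update. The sketch below is illustrative (a linear smoothing filter stands in for the denoiser; the parameter values are assumed for the demo):

```python
import numpy as np

def denoise(x):
    # Hypothetical stand-in denoiser: symmetric linear smoothing (periodic).
    return (x + np.roll(x, 1) + np.roll(x, -1)) / 3.0

def red_fixed_point(y, sigma, lam, iters=50):
    # Stationarity: (x - y)/sigma^2 + lam*(x - f(x)) = 0
    #   => x = (y/sigma^2 + lam*f(x)) / (1/sigma^2 + lam),
    # iterated with f(x) frozen at the previous estimate.  Passivity of f
    # makes this map a contraction (factor < lam / (1/sigma^2 + lam) < 1).
    x = y.copy()
    for _ in range(iters):
        x = (y / sigma**2 + lam * denoise(x)) / (1.0 / sigma**2 + lam)
    return x

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 128))
sigma = 0.1
y = clean + sigma * rng.standard_normal(128)

lam = 20.0
x_fp = red_fixed_point(y, sigma, lam)
# First-order residual at the solution (should vanish at a fixed point):
residual = np.linalg.norm((x_fp - y) / sigma**2 + lam * (x_fp - denoise(x_fp)))
```

Each iteration blends the observation with the denoised current estimate, which is the scalar-operator special case of the Wiener-filter-plus-denoising interpretation above.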
4. Practical Applications and Numerical Performance
RED is particularly effective in classical inverse imaging tasks, exemplified by:
- Image Deblurring: Formulating $\ell(y, x) = \frac{1}{2\sigma^{2}}\|Hx - y\|_2^2$, where $H$ is the blur operator.
- Single Image Super-Resolution: Incorporating both blur and down-sampling within the imaging model.
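As an illustration of the deblurring setup (not the paper's experimental code), the sketch below applies the closed-form fixed-point update in the Fourier domain for a circulant blur $H$, whose normal operator $H^{T}H$ is diagonalized by the FFT. The blur kernel, noise level, $\lambda$, and the linear smoothing stand-in denoiser are all assumptions of this demo:

```python
import numpy as np

n = 128
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, n))

# Circulant blur H, diagonalized by the FFT.
kernel = np.zeros(n)
kernel[[0, 1, -1]] = [0.5, 0.25, 0.25]   # assumed symmetric blur kernel
K = np.fft.fft(kernel)                    # eigenvalues of H

def blur(x):
    return np.real(np.fft.ifft(np.fft.fft(x) * K))

def denoise(x):
    # Hypothetical stand-in denoiser: symmetric linear smoothing (periodic).
    return (x + np.roll(x, 1) + np.roll(x, -1)) / 3.0

sigma, lam = 0.02, 50.0
y = blur(clean) + sigma * rng.standard_normal(n)

def red_objective(x):
    # E(x) = ||Hx - y||^2 / (2 sigma^2) + (lam/2) x^T (x - f(x))
    return 0.5 * np.sum((blur(x) - y) ** 2) / sigma**2 + 0.5 * lam * x @ (x - denoise(x))

# Fixed-point update x <- (H^T H / sigma^2 + lam I)^{-1} (H^T y / sigma^2 + lam f(x)),
# computed per-frequency since H^T H is diagonal in the Fourier basis.
denom = np.abs(K) ** 2 / sigma**2 + lam
x = y.copy()
for _ in range(50):
    rhs = np.conj(K) * np.fft.fft(y) / sigma**2 + lam * np.fft.fft(denoise(x))
    x = np.real(np.fft.ifft(rhs / denom))
```

The per-frequency division is exactly a Wiener-type deconvolution, regularized toward the denoised estimate rather than toward zero.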
In these settings, using advanced denoisers (e.g., Trainable Nonlinear Reaction Diffusion—TNRD) within the RED framework yields restoration results that are competitive with, or slightly superior to, leading methods such as NCSR and IDD-BM3D. Quantitative improvements are reflected in state-of-the-art PSNR values and qualitative results exhibit enhanced edge sharpness and preservation of fine details (Romano et al., 2016).
RED remains effective even with simple denoisers (e.g., median filters), producing appreciably better outputs than naive baselines (such as bicubic interpolation in super-resolution), though the gap relative to advanced denoisers remains significant.
A comparative summary of optimizer variants is:
| Optimization Approach | Convergence Speed | Objective Value | PSNR/Quality |
|---|---|---|---|
| Steepest Descent | Moderate | Near global | Consistent |
| ADMM | Typically faster | Near global | Consistent |
| Fixed-Point | Fastest (often) | Near global | Consistent |
All converged to comparable quality, though implementation and iteration cost per method differ.
5. Theoretical Guarantees and Limitations
RED’s explicit regularization and convexity underpin strong theoretical guarantees:
- When the denoiser is locally homogeneous and strongly passive (spectral radius of $\nabla f(x)$ at most 1), the regularizer $\rho(x)$ is convex and the composite cost function is convex if $\ell$ is convex.
- Under these conditions, standard iterative minimization schemes converge to the global optimum (unique when the cost is strictly convex), a property that sets RED apart from ADMM-based P³, which may only guarantee stationary-point convergence and often requires elaborate parameter tuning.
The stationarity condition directly pulls the solution toward points where $x = f(x)$, i.e., denoised and original images are aligned, corresponding to a clean image under the learned prior.
Potential limitations arise when denoiser properties (local homogeneity, strong passivity) are violated—e.g., for certain classes of learned or nonlocal denoisers—where convexity or the explicit gradient formula may no longer hold, raising questions about global optimality and requiring further theoretical investigation.
6. Significance and Impact
RED provides a systematic and flexible approach for leveraging advanced denoising algorithms as regularizers in inverse imaging. Its explicit cost function, efficient and well-behaved gradient computation, and robust convergence properties under reasonable denoiser assumptions distinguish it from earlier plug-and-play technologies.
By decoupling the optimization procedure from the denoising engine—allowing any gradient-based or splitting method—and providing a path to theoretical and empirical state-of-the-art performance in tasks such as deblurring and super-resolution, RED sets a foundation for modular and adaptable algorithm design in modern image restoration and related inverse problems. Its influence extends to subsequent theoretical developments, clarifications, and algorithmic accelerations within the RED and plug-and-play literature.