
Half-Quadratic Regularization

Updated 8 October 2025
  • Half-quadratic regularization is an optimization technique that reformulates non-convex regularization problems using implicit concave functions and auxiliary variables.
  • It transforms complex signal and image reconstruction tasks into convex or quadratic subproblems, enabling efficient block coordinate descent algorithms.
  • This approach preserves important features such as edges and sparsity, making it highly effective for applications like denoising and deblurring.

Half-quadratic regularization is an optimization technique that alleviates the difficulties posed by non-convex or non-smooth regularizers, especially in signal and image reconstruction. The method reformulates the original objective into an augmented problem by introducing auxiliary variables and applying functional transformations, often exploiting concave structure. The resulting subproblems are convex or quadratic with respect to one block of variables, making the overall problem amenable to efficient block coordinate descent or alternating minimization algorithms.

1. Mathematical Foundations of Half-Quadratic Regularization

A half-quadratic regularization problem often arises when reconstructing a signal $x$ from measurements $b$, modeled as

$$\min_{x} f(x) = \text{Data Fidelity Term} + \beta \cdot \text{Regularization Term},$$

where the regularizer is typically non-smooth and/or non-convex, designed to preserve edges or sparsity.

A central construct is the use of implicit concave functions, defined as compositions $V \circ \Phi$ where $V$ is strictly concave and differentiable, and $\Phi$ is a continuously differentiable mapping, commonly $\Phi(t) = t^2$ in edge-preserving regularization scenarios (Latorre, 7 Oct 2025). Many popular regularizers in image processing can be written in this form.

Applying the Fenchel conjugate $V^*$ of $V$, one has

$$V(\Phi(x)) \leq \langle \Phi(x), \sigma \rangle - V^*(\sigma)$$

for all $x$ and auxiliary variables $\sigma$, suggesting the augmented function

$$L(x, \sigma) = \langle \Phi(x), \sigma \rangle - V^*(\sigma).$$

The minimization of $f(x)$ over $x$ is thus transformed into a joint minimization of $L(x, \sigma)$ over both $x$ and $\sigma$.
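As a concrete illustration (not taken from the paper), consider $V(y) = \sqrt{y}$ on $y \ge 0$, whose concave conjugate works out to $V^*(\sigma) = -1/(4\sigma)$ for $\sigma > 0$. The inequality above then reads $\sqrt{y} \le y\sigma + 1/(4\sigma)$, with equality at the optimal multiplier $\sigma = 1/(2\sqrt{y})$. A minimal numeric sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.uniform(0.1, 10.0, size=1000)       # values of Phi(x), e.g. Phi(t) = t^2
sigma = rng.uniform(0.1, 10.0, size=1000)   # arbitrary positive auxiliary variables

# Fenchel-conjugate bound: V(y) <= y*sigma - V*(sigma), with V(y) = sqrt(y)
lhs = np.sqrt(y)
rhs = y * sigma + 1.0 / (4.0 * sigma)       # -V*(sigma) = 1/(4*sigma)
assert np.all(lhs <= rhs + 1e-12)

# Equality is attained at the optimal multiplier sigma* = 1/(2*sqrt(y))
sigma_star = 1.0 / (2.0 * np.sqrt(y))
assert np.allclose(np.sqrt(y), y * sigma_star + 1.0 / (4.0 * sigma_star))
```

The tightness of the bound at $\sigma^*$ is exactly what makes the joint minimization of $L(x,\sigma)$ equivalent to minimizing $f(x)$ alone.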

2. Augmented Problem Structure and Variable Splitting

The introduction of auxiliary variables leads to an augmented optimization problem with at least biconvex structure:

$$L(x, \sigma) = \langle \Phi(x), \sigma \rangle - V^*(\sigma),$$

which is convex in $x$ when $\sigma$ is fixed (assuming $\Phi$ is quadratic), and convex in $\sigma$ when $x$ is fixed (due to the concavity of $V^*$).

In edge-preserving image regularization, such as denoising or deblurring, the augmented half-quadratic problem can be written explicitly as

$$\min_{x, \sigma} \|A x - b\|^2 + \beta \sum_{i=1}^m \left[ \sigma_i \|G_i x\|^2 - V^*(\sigma_i) \right],$$

where $A$ is the system matrix, $b$ the observed data, $G_i$ the spatial gradient operators, and $\sigma_i$ the auxiliary variables per pixel or edge (Latorre, 7 Oct 2025).
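The following sketch (an assumption-laden illustration, not code from the paper) checks numerically that minimizing the augmented sum over $\sigma$ recovers the original edge-preserving objective, for a 1D denoising setup with $A = I$, forward-difference operators $G_i$, and $V(y) = \sqrt{y}$ (so $-V^*(\sigma) = 1/(4\sigma)$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 32, 0.5
b = rng.normal(size=n)                  # observed data
x = rng.normal(size=n)                  # an arbitrary candidate signal
G = np.diff(np.eye(n), axis=0)          # rows are forward differences G_i

y = (G @ x) ** 2                        # Phi applied to each G_i x
sigma = 1.0 / (2.0 * np.sqrt(y))        # per-edge optimal auxiliary variables

# Original edge-preserving objective with V(y) = sqrt(y) (a TV-like penalty)
orig = np.sum((x - b) ** 2) + beta * np.sum(np.sqrt(y))
# Augmented half-quadratic objective evaluated at sigma*
aug = np.sum((x - b) ** 2) + beta * np.sum(sigma * y + 1.0 / (4.0 * sigma))
assert np.isclose(orig, aug)
```

For fixed $x$, the inner minimization over each $\sigma_i$ closes the Fenchel gap, so the two objectives agree at $\sigma^*$.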

3. Optimization Algorithms and Block Coordinate Descent

Owing to its biconvexity, the augmented half-quadratic regularization problem is well suited to block coordinate descent algorithms:

  • $x$-subproblem: minimize $L(x, \sigma)$ with respect to $x$ for fixed $\sigma$. This subproblem is quadratic, admitting efficient solutions via direct linear solvers or conjugate gradient methods.
  • $\sigma$-subproblem: minimize $L(x, \sigma)$ over $\sigma$ for fixed $x$. This subproblem is convex due to properties of the Fenchel conjugate.

Each subproblem can typically be solved efficiently, enabling iterative schemes that alternate between the two blocks. Under suitable conditions, global convergence can be established, and stationary points of the augmented problem correspond one-to-one to those of the original (Latorre, 7 Oct 2025).
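The alternating scheme can be sketched for 1D denoising as follows. This is a hedged illustration, not the paper's implementation: it assumes the smoothed concave potential $V(y) = \sqrt{y + \varepsilon^2}$, for which the $\sigma$-step has the closed form $\sigma_i = V'(\|G_i x\|^2)$, while the $x$-step solves a linear system:

```python
import numpy as np

def half_quadratic_denoise(b, beta=3.0, eps=1e-3, iters=50):
    """Alternating minimization for min_x ||x - b||^2 + beta * sum_i V((G_i x)^2),
    with the smoothed concave potential V(y) = sqrt(y + eps^2)."""
    n = len(b)
    G = np.diff(np.eye(n), axis=0)              # finite-difference operators G_i
    x = b.copy()
    for _ in range(iters):
        y = (G @ x) ** 2
        sigma = 0.5 / np.sqrt(y + eps**2)       # sigma-step: sigma_i = V'(y_i)
        # x-step: quadratic, solve (I + beta * G^T diag(sigma) G) x = b
        H = np.eye(n) + beta * (G.T * sigma) @ G
        x = np.linalg.solve(H, b)
    return x

# Usage: denoise a noisy step signal; the edge is preserved while noise is smoothed
clean = np.concatenate([np.zeros(20), np.ones(20)])
noisy = clean + np.random.default_rng(2).normal(scale=0.3, size=40)
denoised = half_quadratic_denoise(noisy)
```

The key design point is that neither step requires a general-purpose nonlinear solver: the $\sigma$-step is elementwise and the $x$-step is a single symmetric positive-definite linear solve.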

4. Applications in Signal and Image Reconstruction

Half-quadratic regularization is extensively used in signal and image processing tasks where edge-preserving regularizers (such as Geman–Reynolds, Huber, or $L_p$-norms) are preferred. Notably, many of these can be written as functions $V(t^2)$, where $V$ is strictly concave.

The augmented form enables:

  • Efficient edge preservation and noise reduction in images
  • Solving otherwise non-convex, non-smooth regularization problems via quadratic subproblems
  • Adaptation to a variety of regularizers, since Table 1 in (Latorre, 7 Oct 2025) explicitly lists most edge-preserving potentials as implicitly concave.
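As one worked example (chosen here for illustration; the paper's own catalog is its Table 1), the Huber penalty with threshold $\delta$ can be rewritten as $\psi(t) = V(t^2)$ with $V(y) = y/2$ for $y \le \delta^2$ and $V(y) = \delta\sqrt{y} - \delta^2/2$ otherwise, and this $V$ is concave in $y$. A numeric check of both claims:

```python
import numpy as np

delta = 1.0

def huber(t):
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

def V(y):
    # Huber rewritten as a function of y = t^2; concave in y
    return np.where(y <= delta**2, 0.5 * y, delta * np.sqrt(y) - 0.5 * delta**2)

t = np.linspace(-3, 3, 601)
assert np.allclose(huber(t), V(t**2))        # psi(t) = V(t^2)

# Numerical concavity check: second differences of V are non-positive
y = np.linspace(0.0, 9.0, 901)
d2 = np.diff(V(y), 2)
assert np.all(d2 <= 1e-12)
```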

5. Theoretical Equivalence and Practical Implications

A proven result is the equivalence of stationary points and (under second-order conditions) local minima between the original objective and the augmented half-quadratic formulation. In particular, for $f(x) = V(\Phi(x))$, if $x^*$ is a stationary point of $f$, then the pair $(x^*, \sigma^*)$, where $\sigma^* = \arg\min_\sigma [\langle \Phi(x^*), \sigma \rangle - V^*(\sigma)]$, is a stationary point of $L(x, \sigma)$, and vice versa (Latorre, 7 Oct 2025).
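A minimal one-dimensional sketch of this correspondence, under the assumed smoothed potential $V(y) = \sqrt{y + \varepsilon^2}$ and $\Phi(x) = x^2$ (so $f(x) = \sqrt{x^2 + \varepsilon^2}$): the partial gradient of $L$ in $x$, evaluated at the optimal $\sigma^* = V'(\Phi(x))$, coincides with $f'(x)$, so the two problems share stationary points.

```python
import numpy as np

eps = 0.1
f  = lambda x: np.sqrt(x**2 + eps**2)         # f(x) = V(Phi(x)), Phi(x) = x^2
df = lambda x: x / np.sqrt(x**2 + eps**2)     # f'(x)

x = np.linspace(-2, 2, 401)
sigma_star = 0.5 / np.sqrt(x**2 + eps**2)     # argmin_s [Phi(x)*s - V*(s)] = V'(Phi(x))

# Partial derivative of L(x, sigma) = Phi(x)*sigma - V*(sigma) in x, at sigma*
dL_dx = 2.0 * x * sigma_star
assert np.allclose(dL_dx, df(x))
```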

This structural result ensures:

  • No spurious solutions are introduced by auxiliary variable splitting
  • The block coordinate (or nonlinear Gauss-Seidel) scheme converges to meaningful solutions of the original problem.

6. Impact on Non-Convex Optimization and Extensions

The half-quadratic approach via implicit concave functions provides both a theoretical and practical framework for a wide class of non-convex signal recovery and image processing problems. Notable features include:

  • Biconvexity of the augmented problem, with objectives that are often globally bounded from below
  • Efficient numerical algorithms, since each subproblem is convex or quadratic
  • Generalization to other structured non-convex problems through a suitable choice of the mapping $\Phi$ and potential $V$

A plausible implication is that such reformulations may facilitate scalable solvers in modern large-scale imaging and machine learning tasks where structured, non-convex penalties become unavoidable.

7. Representative Edge-Preserving Regularizers and Augmented Formulations

Many standard edge-preserving regularizers can be formulated as implicit concave functions:

| Regularizer $\psi(t)$ | $V(y)$ form | Associated Fenchel conjugate $V^*(\sigma)$ |
|-------------------------------|---------------------------|----------------------------------------------|
| Huber, Geman–Reynolds, $L_p$ | $V(y) = -y^{p/2}$, etc. | See Table 1 in (Latorre, 7 Oct 2025) |

For these formulations, the resulting augmented problems are biconvex and bounded from below.


In summary, half-quadratic regularization—grounded in implicit concave functions and Fenchel conjugate theory—transforms challenging non-convex signal reconstruction problems into augmented forms that admit efficient, theoretically sound block coordinate descent algorithms. This framework has demonstrable utility for a broad class of edge-preserving regularization tasks and is foundational for ongoing developments in structured non-convex optimization (Latorre, 7 Oct 2025).

References (1)
