
Onsager Correction in GOAMP

Updated 11 December 2025
  • Onsager correction in GOAMP is designed to enforce asymptotic Gaussianity and error decoupling, ensuring that each iteration’s error becomes uncorrelated and predictable.
  • The framework incorporates explicit Onsager terms derived from normalized Jacobian traces, which cancel out inherent error dependencies during iterative updates.
  • Empirical results demonstrate that GOAMP outperforms conventional AMP, achieving near-optimal MSE under challenging conditions such as ill-conditioned and structured sensing matrices.

The Onsager correction in Generalized Orthogonal Approximate Message Passing (GOAMP) constitutes a central component enabling stable, high-dimensional sparse inference from generalized linear measurements, particularly under challenging conditions such as sublinear sparsity and structured, ill-conditioned sensing matrices. GOAMP generalizes AMP and orthogonal AMP (OAMP) to cases where the number of nonzero components grows sublinearly with signal dimension, and achieves asymptotic error decoupling via explicit Onsager terms designed using state-evolution theory for orthogonally invariant matrices (Takeuchi, 3 Dec 2025).

1. Core Principle of Onsager Correction in GOAMP

The Onsager correction in GOAMP is constructed to enforce asymptotic Gaussianity and decorrelation of estimation errors across iterations. In AMP and related algorithms, the iteration involves passing estimates between denoiser and linear modules. Direct iteration often induces correlations between the current error and prior residuals, invalidating state-evolution predictions and causing algorithmic breakdown for non-i.i.d. Gaussian sensing matrices. The Onsager correction explicitly subtracts a weighted prior message, with the weight (the Onsager coefficient) given by the average divergence (Jacobian trace) of the denoiser or linear module’s output with respect to its input. This correction ensures that, in the large-system limit with orthogonally invariant sensing matrices, estimation errors become asymptotically white and Gaussian, making them analyzable and predictable via scalar recursion (Takeuchi, 3 Dec 2025).
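
To make this concrete, below is a minimal numpy sketch of a single Onsager-corrected update in the style described above; it is a hedged illustration rather than the paper's implementation, and the denoiser `soft`, the finite-difference divergence estimator, and all parameter values are assumptions chosen for clarity.

```python
import numpy as np

def estimate_divergence(f, v, eps=1e-4, n_probes=1, rng=None):
    """Monte Carlo estimate of the divergence (Jacobian trace) of f at v.

    Uses random probes p with E[p p^T] = I and a finite-difference
    directional derivative, so E[p . (f(v + eps*p) - f(v)) / eps] = tr(J).
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_probes):
        p = rng.standard_normal(v.shape)
        total += p @ (f(v + eps * p) - f(v)) / eps
    return total / n_probes

def onsager_corrected_message(f, v):
    """Subtract the weighted prior message; the weight xi is the
    normalized divergence, as in the GOAMP-style updates."""
    xi = estimate_divergence(f, v) / v.size
    return (f(v) - xi * v) / (1.0 - xi)

# Illustrative denoiser: elementwise soft thresholding (threshold 0.5).
soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - 0.5, 0.0)
v = np.random.default_rng(0).standard_normal(1000)
msg = onsager_corrected_message(soft, v)
```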

2. Structure of GOAMP Iterations and Explicit Onsager Terms

GOAMP is organized into four iterative modules: outer and inner denoising modules corresponding to the signal and measurement domains, each followed by an Onsager correction. Let $t$ index iterations and $A, B$ index the denoising modules. The updates proceed as follows:

  • Outer-A (measurement domain, linear iteration):
    • LMMSE update: $z_A^t = f_A^z(z_{B\to A}^t, A x_{B\to A}^t; v_{B\to A}^{z,t}, v_{B\to A}^{x,t})$
    • Onsager correction:

    $$z_{A\to B}^t = \frac{z_A^t - \xi_{A,t}^z\, z_{B\to A}^t}{1 - \xi_{A,t}^z}, \qquad \xi_{A,t}^z = \frac{1}{M}\,\mathrm{div}_{z_{B\to A}^t}\, z_A^t$$

  • Inner-A (signal domain, prior iteration):

    • LMMSE update: $x_A^t = f_A^x(x_{B\to A}^t, z_{B\to A}^t; v_{B\to A}^{x,t}, v_{B\to A}^{z,t})$
    • Onsager correction:

    $$x_{A\to B}^t = \frac{x_A^t - \xi_{A,t}^x\, x_{B\to A}^t}{1 - \xi_{A,t}^x}, \qquad \xi_{A,t}^x = \frac{1}{N}\,\mathrm{div}_{x_{B\to A}^t}\, x_A^t$$

  • Outer-B (measurement nonlinearity domain):

    • Scalar denoiser: $z_B^{t+1} = f_B^z(z_{A\to B}^t, y; v_{A\to B}^{z,t})$
    • Onsager correction:

    $$z_{B\to A}^{t+1} = \frac{z_B^{t+1} - \xi_{B,t}^z\, z_{A\to B}^t - \eta_{B,t}\, z}{1 - \xi_{B,t}^z - \eta_{B,t}}$$

    $$\xi_{B,t}^z = \frac{1}{M}\,\mathrm{div}_{z_{A\to B}^t}\, z_B^{t+1}, \qquad \eta_{B,t} = 1 - \xi_{B,t}^z - \frac{1}{M}\,\mathrm{div}_{z} f_B^z(z_{A\to B}^t, g(z,w); v_{A\to B}^{z,t})$$

  • Inner-B (signal prior denoiser):

    • Scalar denoiser: $x_B^{t+1} = f_B^x(x_{A\to B}^t; v_{A\to B}^{x,t})$
    • Onsager correction:

    $$x_{B\to A}^{t+1} = \frac{x_B^{t+1} - (M/N)\,\xi_{B,t}^x\, x_{A\to B}^t}{1 - (M/N)\,\xi_{B,t}^x}, \qquad \xi_{B,t}^x = \frac{1}{M} \sum_{n=1}^N \partial_0 [f_B^x]_n(x_{A\to B}^t; v_{A\to B}^{x,t})$$

Each Onsager term is derived as the normalized divergence (Jacobian trace) of the corresponding module, correcting for first-order correlations and restoring error "decoupling."
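
As a concrete instance of the Inner-B update, the divergence of a soft-thresholding prior denoiser is available in closed form (the count of coordinates above threshold), so the $(M/N)$-scaled correction can be written directly. The sketch below is illustrative and assumes the surrounding loop supplies the message $x_{A\to B}^t$ and threshold; it is not the paper's code.

```python
import numpy as np

def soft_threshold(u, tau):
    """Elementwise soft threshold; its derivative is 1{|u| > tau}."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def inner_b_step(x_AtoB, tau, M, N):
    """Inner-B module: scalar denoiser plus (M/N)-scaled Onsager correction.

    For soft thresholding, the divergence sum is just the number of
    surviving coordinates, so xi = (#survivors) / M as in the displayed formula.
    """
    x_B = soft_threshold(x_AtoB, tau)
    xi = np.count_nonzero(np.abs(x_AtoB) > tau) / M
    scale = (M / N) * xi
    return (x_B - scale * x_AtoB) / (1.0 - scale)
```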

3. State Evolution and Theoretical Guarantees

GOAMP’s Onsager correction is tightly connected to its rigorous state evolution (SE), which predicts per-iteration mean-square errors in the limit $N \to \infty$ with $k = O(N^\gamma)$, $\gamma \in (0,1)$, and $M/(k \log(N/k)) \to \delta > 0$.

The SE recursion involves the following quantities:

  • Estimation MSEs:

    • $\bar{v}_{B\to A}^{x,t} = \mathbb{E}[\|x_{B\to A}^t - x\|^2]/N$
    • $\bar{v}_{B\to A}^{z,t} = \mathbb{E}[\|z_{B\to A}^t - z\|^2]/M$
  • Onsager coefficients are given as averages of the relevant Jacobian traces, appearing as the normalized expectations (over the inputs) of module output derivatives.
  • In the Bayes-optimal setting with linear measurements and constant-magnitude nonzeros, the threshold for error-free recovery is

$$\delta = \lim_{N \to \infty} \frac{M}{k \log(N/k)} > \delta_*, \qquad \delta_* = 2\left( \mathbb{E}\!\left[ \frac{c^2 \Lambda}{\sigma^2 + c^2 \Lambda} \right] \right)^{-1}$$

where $c$ is the nonzero magnitude and $\Lambda$ is distributed according to the asymptotic singular-value distribution of $A$ (Takeuchi, 3 Dec 2025).
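
For intuition, $\delta_*$ can be evaluated numerically for a chosen spectrum. The sketch below is a hedged illustration: it assumes $\Lambda$ is given by the squared singular values of $A$, and the magnitude $c$, noise level $\sigma$, and test spectrum are placeholder choices, not values from the paper.

```python
import numpy as np

def delta_star(singular_values, c=1.0, sigma=0.1):
    """delta_* = 2 / E[c^2 Lambda / (sigma^2 + c^2 Lambda)], with the
    expectation taken over the empirical spectrum of A."""
    lam = np.asarray(singular_values) ** 2  # assumption: Lambda = squared singular values
    frac = (c ** 2 * lam) / (sigma ** 2 + c ** 2 * lam)
    return 2.0 / frac.mean()

# Example: geometrically spaced singular values with condition number 100.
M, kappa = 200, 100.0
s = kappa ** (-np.arange(M) / (M - 1))  # from 1 down to 1/kappa
print(delta_star(s))
```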

A key result is that GOAMP attains the same information-theoretic minimum measurement threshold for perfect reconstruction as standard AMP does with i.i.d. Gaussian $A$, while maintaining robust convergence for matrix ensembles and conditioning far beyond that regime.

4. Asymptotic Error-Decorrelation Mechanism

The Onsager correction enforces that each module’s output error is, in law, asymptotically independent of all previous module outputs, up to Gaussian fluctuations whose variance is accurately tracked by SE. This property is known as error decoupling. Explicitly, subtracting the Onsager term cancels the leading-order dependencies in the residuals, so that the input to each denoiser behaves as an additive white Gaussian noise (AWGN)-corrupted version of the true signal; a numerical check of this orthogonality follows the list below. This guarantees:

  • Predictability and analyzability of algorithmic dynamics.
  • Correctness of empirical MSE curves as predicted by scalar SE recursion, even when AA is non-i.i.d. or ill-conditioned.
  • Performance equivalence with optimal Bayesian procedures at the MMSE threshold in large dimensions.
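
This orthogonality can be checked numerically: by Stein's lemma, setting the Onsager coefficient to the normalized divergence makes the corrected output error uncorrelated (in expectation) with the input noise. A minimal sketch, assuming a soft-thresholding denoiser on an AWGN-corrupted sparse signal (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, tau = 100_000, 0.3, 0.5

# AWGN-corrupted input to the denoiser: v = x + sigma * g.
x = rng.choice([0.0, 1.0], size=N, p=[0.95, 0.05])
g = rng.standard_normal(N)
v = x + sigma * g

# Soft-threshold denoiser and its exact normalized divergence.
u = np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
xi = np.count_nonzero(np.abs(v) > tau) / N

# Onsager-corrected output and both errors.
u_corr = (u - xi * v) / (1.0 - xi)
err_raw, err_corr = u - x, u_corr - x

# Correlation with the input noise: clearly nonzero before correction, ~0 after.
print("raw:      ", err_raw @ g / N)
print("corrected:", err_corr @ g / N)
```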

5. Empirical Performance and Stability

Numerical simulations demonstrate that GOAMP with Onsager correction:

  • Maintains near-optimal MSE up to very large condition numbers of $A$ (e.g., $\kappa \approx 100$ for $N = 2^{16}$, $k = 16$, $M = 200$, SNR = 40 dB).
  • Exhibits breakdown points in $\kappa$ that SE accurately predicts.
  • Outperforms conventional AMP, GLasso, and even algorithms specifically tailored for non-i.i.d. or quantized measurements, both in linear and 1-bit compressed sensing regimes.
  • Guarantees, via the Onsager-adjusted SE curves, that the scalar recursion ($y_{t+1} = \phi(x_t)$, $x_t = \psi(y_t)$) admits $(0,0)$ as its unique fixed point when $\delta > \delta_*$, ensuring convergence to the true signal (Takeuchi, 3 Dec 2025); a toy illustration follows the list below.
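
The fixed-point behavior can be illustrated by iterating the scalar SE recursion. The maps below are hypothetical contractive stand-ins (monotone, vanishing at zero), not the paper's SE functions; they only show the recursion collapsing to $(0,0)$.

```python
# Hypothetical stand-ins for the SE maps phi and psi (not from the paper):
phi = lambda x: 0.6 * x / (1.0 + x)  # y_{t+1} = phi(x_t)
psi = lambda y: 0.8 * y / (1.0 + y)  # x_t = psi(y_t)

y = 1.0  # arbitrary positive initialization
for _ in range(50):
    x = psi(y)
    y = phi(x)
print(x, y)  # both tend to 0: (0, 0) is the unique fixed point here
```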

6. Comparison with AMP and Onsager-Corrected Neural Architectures

GOAMP’s Onsager correction generalizes the mechanism originally introduced for AMP with i.i.d. Gaussian $A$. In deep learning contexts, architectures such as LAMP unfold AMP iterations as neural network layers with learnable denoisers and Onsager coefficients, retaining the key principle of decoupling prediction errors via the Onsager reaction term (Borgerding et al., 2016). Empirical results confirm that these mechanisms significantly outperform non-Onsager-corrected variants in convergence speed, robustness, and error decoupling, especially for ill-conditioned or structured $A$; a simplified LAMP-style layer is sketched after the table below.

| Method | Matrix Classes | Onsager Term Type | Error Decoupling |
| --- | --- | --- | --- |
| AMP | i.i.d. Gaussian | Scalar $b_t$ (from $\ell_0$ count) | Asymptotic |
| GOAMP | Orthogonally invariant | Averaged Jacobian ($\xi$, $\eta$) | Asymptotic |
| LAMP/LISTA | Learned (typ. i.i.d.) | Learnable $b_t$, Jacobian trace | Near-i.i.d. |
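
For comparison, a single LAMP-style layer can be sketched as follows. This is a simplified rendering of the unfolded-AMP structure of Borgerding et al. (2016), with a learned back-projection `B`, scalar Onsager coefficient `b`, and threshold `lam`; it is not their exact implementation.

```python
import numpy as np

def lamp_layer(x, v, y, A, B, b, lam):
    """One unfolded AMP layer with learnable pieces (B, b, lam).

    v_new carries the Onsager feedback b * v (the 'reaction' term);
    x_new applies a learned back-projection, then soft thresholding.
    """
    v_new = y - A @ x + b * v          # Onsager-corrected residual
    r = x + B @ v_new                  # pseudo-data for the denoiser
    x_new = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return x_new, v_new
```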

The extension of Onsager-corrected updates to generic denoisers and broadened matrix ensembles is central to the GOAMP approach.

7. Significance in Sparse Estimation Literature

The systematic design and deployment of Onsager corrections distinguish GOAMP as a theoretically principled and practically robust framework for high-dimensional, sublinear-sparsity inference under broad measurement models. The rigorous alignment of algorithmic iteration with state-evolution scalars—mediated by the Onsager correction—guarantees both algorithmic stability and sharp phase transitions matching information-theoretic limits (Takeuchi, 3 Dec 2025). As a result, GOAMP and its Onsager-corrected variants provide a unified and analyzable basis for modern high-dimensional signal reconstruction with strong robustness to structural deviations in sensing operators.
