
Score-Based Turbo Message Passing (STMP)

Updated 23 December 2025
  • STMP is a Bayesian iterative inference method that integrates turbo message passing with deep score-based MMSE denoising to solve severely ill-posed linear inverse problems.
  • It alternates between linear MMSE estimation and score-based denoising using deep generative models, enabling near-optimal recovery in compressive imaging and wireless joint activity detection.
  • Empirical results demonstrate that STMP outperforms traditional methods in speed, convergence, and recovery quality, even with low sampling rates and severe quantization.

Score-Based Turbo Message Passing (STMP) is a Bayesian iterative inference methodology that combines turbo-style message passing with deep score-based minimum mean-squared error (MMSE) denoising. It is designed to achieve near-Bayesian-optimal recovery for severely ill-posed linear inverse problems, with major applications in compressive image recovery and joint activity detection/channel estimation in massive wireless connectivity. STMP replaces classical hand-crafted or non-learned denoisers with powerful deep generative models capable of learning highly expressive score functions, enabling rapid and accurate recovery even at very low sampling rates or in the presence of severe measurement quantization (Cai et al., 28 Mar 2025, Cai et al., 31 May 2025, Cai et al., 16 Dec 2025).

1. Core Principles and Problem Formulation

STMP addresses the standard linear observation model $y = Ax + n$, $n \sim \mathcal{N}(0, \delta_0^2 I_M)$, where $x \in \mathbb{R}^N$ (for imaging) or higher-dimensional $x$ (e.g., a channel matrix in wireless), $A$ is the known measurement operator ($A \in \mathbb{R}^{M \times N}$, or complex-valued in wireless), and $y \in \mathbb{R}^M$ is observed. The goal is to estimate the posterior mean (the MMSE solution) or to sample from the posterior $p(x \mid y)$. When $M \ll N$, the problem is underdetermined; introducing strong learned priors is critical for success.
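To make the setting concrete, here is a minimal NumPy sketch of the observation model; the dimensions and noise level are illustrative choices, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256            # M << N: the system is severely underdetermined
delta0 = 0.05             # measurement-noise standard deviation

A = rng.standard_normal((M, N)) / np.sqrt(M)   # generic measurement operator
x = rng.standard_normal(N)                     # unknown signal
y = A @ x + delta0 * rng.standard_normal(M)    # noisy linear measurements

# Least squares alone cannot resolve the (N - M)-dimensional null space of A;
# a strong learned prior on x must supply the missing information.
```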

STMP employs a "turbo" factorization of the posterior, alternating between:

  • Module A: Linear MMSE estimation, incorporating the likelihood and a Gaussian approximation of incoming beliefs.
  • Module B: Score-based MMSE denoising, plugging in a learned score-network as the empirical Bayes denoiser using Tweedie’s formula.

This alternation, together with extrinsic message updates, results in a rapid and robust iterative scheme (Cai et al., 28 Mar 2025, Cai et al., 16 Dec 2025).

2. Algorithmic Structure and Updates

At each iteration $t$, STMP maintains for Modules A and B a prior mean/variance pair and computes posterior means/variances according to the following update schedule:

  1. Module A (Linear MMSE Estimator):

    • Prior: $x_A^{\mathrm{pri}}$, $v_A^{\mathrm{pri}}$
    • Posterior computation:

    $x_A^{\mathrm{post}} = x_A^{\mathrm{pri}} + v_A^{\mathrm{pri}} A^{\top} \left( v_A^{\mathrm{pri}} A A^{\top} + \delta_0^2 I_M \right)^{-1} \left( y - A x_A^{\mathrm{pri}} \right)$

    $v_A^{\mathrm{post}} = v_A^{\mathrm{pri}} - \frac{(v_A^{\mathrm{pri}})^2}{N} \operatorname{tr}\left( A^{\top} \left( v_A^{\mathrm{pri}} A A^{\top} + \delta_0^2 I_M \right)^{-1} A \right)$

    • Extrinsic outputs to Module B:

    $\frac{1}{v_A^{\mathrm{ext}}} = \frac{1}{v_A^{\mathrm{post}}} - \frac{1}{v_A^{\mathrm{pri}}}, \qquad x_A^{\mathrm{ext}} = v_A^{\mathrm{ext}} \left( \frac{x_A^{\mathrm{post}}}{v_A^{\mathrm{post}}} - \frac{x_A^{\mathrm{pri}}}{v_A^{\mathrm{pri}}} \right)$

  2. Module B (Score-Based MMSE Denoiser):

    • Prior: $x_B^{\mathrm{pri}} = x_A^{\mathrm{ext}}$, $v_B^{\mathrm{pri}} = v_A^{\mathrm{ext}}$
    • MMSE denoising via Tweedie's formula:

    $x_B^{\mathrm{post}} = x_B^{\mathrm{pri}} + v_B^{\mathrm{pri}}\, s_\theta\left( x_B^{\mathrm{pri}}, v_B^{\mathrm{pri}} \right)$

    $v_B^{\mathrm{post}} = v_B^{\mathrm{pri}} + \frac{(v_B^{\mathrm{pri}})^2}{N}\, s_\phi^{(2)}\left( x_B^{\mathrm{pri}}, v_B^{\mathrm{pri}} \right)$

    where $s_\theta$ denotes a trained first-order score network and $s_\phi^{(2)}$ a trained trace-diagonalized second-order score network.
    • Extrinsic outputs to Module A:

    $\frac{1}{v_B^{\mathrm{ext}}} = \frac{1}{v_B^{\mathrm{post}}} - \frac{1}{v_B^{\mathrm{pri}}}, \qquad x_B^{\mathrm{ext}} = v_B^{\mathrm{ext}} \left( \frac{x_B^{\mathrm{post}}}{v_B^{\mathrm{post}}} - \frac{x_B^{\mathrm{pri}}}{v_B^{\mathrm{pri}}} \right)$

  • Prepare next iteration: $x_A^{\mathrm{pri}} \leftarrow x_B^{\mathrm{ext}}$, $v_A^{\mathrm{pri}} \leftarrow v_B^{\mathrm{ext}}$.
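The two-module turbo recursion can be sketched in NumPy. This is an illustrative sketch, not the papers' implementation: the score-based denoiser of Module B is abstracted as a callable `denoise(r, v)` returning a posterior mean and scalar variance (in STMP this would be a trained score network applied via Tweedie's formula), and a generic dense LMMSE solve is used with no fast-operator specialization:

```python
import numpy as np

def extrinsic(x_post, v_post, x_pri, v_pri):
    """Gaussian message division: remove the incoming prior's information."""
    v_ext = 1.0 / (1.0 / v_post - 1.0 / v_pri)
    x_ext = v_ext * (x_post / v_post - x_pri / v_pri)
    return x_ext, v_ext

def stmp_iteration(y, A, delta0, x_pri, v_pri, denoise):
    """One turbo pass: LMMSE (Module A), then MMSE denoising (Module B).

    `denoise(r, v) -> (x_hat, v_hat)` stands in for the score-based denoiser.
    Returns Module B's extrinsic message, the next pseudo-prior for Module A.
    """
    M, N = A.shape
    # Module A: LMMSE posterior under the pseudo-prior N(x_pri, v_pri I).
    S = v_pri * (A @ A.T) + delta0**2 * np.eye(M)
    x_post = x_pri + v_pri * A.T @ np.linalg.solve(S, y - A @ x_pri)
    v_post = v_pri - (v_pri**2 / N) * np.trace(A.T @ np.linalg.solve(S, A))
    x_B, v_B = extrinsic(x_post, v_post, x_pri, v_pri)

    # Module B: MMSE denoising of Module A's extrinsic estimate.
    x_post_B, v_post_B = denoise(x_B, v_B)
    return extrinsic(x_post_B, v_post_B, x_B, v_B)
```

A convenient correctness check: for a Gaussian signal prior $x \sim \mathcal{N}(0, I)$, the exact MMSE denoiser is `lambda r, v: (r / (1 + v), v / (1 + v))`, and the recursion then reproduces the closed-form Gaussian posterior.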

For quantized measurements, an additional Module C performs component-wise MMSE dequantization, inserting extrinsic pseudo-measurements into the turbo cycle (Cai et al., 16 Dec 2025).

3. Score-based Priors and MMSE Denoising

The crucial innovation in STMP is the replacement of hand-designed denoisers with deep generative models trained via denoising score matching. Given a noisy observation $r = x + w$ with $w \sim \mathcal{N}(0, v I_N)$, Tweedie's formula yields $\mathbb{E}[x \mid r] = r + v \nabla_r \log p_v(r)$, where $p_v$ denotes the marginal density of $r$. This expectation is operationalized in STMP by the learned score network $s_\theta(r, v) \approx \nabla_r \log p_v(r)$, resulting in

$\hat{x} = r + v\, s_\theta(r, v).$

The posterior variance is estimated by training a second-order score network $s_\phi^{(2)}(r, v)$ to approximate the trace of the Hessian $\operatorname{tr}\left( \nabla_r^2 \log p_v(r) \right)$,

$\hat{v} = v + \frac{v^2}{N}\, s_\phi^{(2)}(r, v) \approx v + \frac{v^2}{N} \operatorname{tr}\left( \nabla_r^2 \log p_v(r) \right).$

This structure connects STMP with empirical Bayes methodology, ensuring that the denoising step is statistically consistent with the true (though intractable) data posterior (Cai et al., 28 Mar 2025, Cai et al., 16 Dec 2025, Cai et al., 31 May 2025).
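Tweedie's formula can be sanity-checked on a prior whose score is analytic: for $x \sim \mathcal{N}(0, \sigma^2 I)$, the noisy marginal is $\mathcal{N}(0, (\sigma^2 + v) I)$, so the score and Hessian trace are known in closed form (in STMP these are precisely the quantities the first- and second-order networks learn). The constants below are arbitrary:

```python
import numpy as np

sigma2, v, N = 2.0, 0.5, 1000
rng = np.random.default_rng(0)
r = rng.standard_normal(N) * np.sqrt(sigma2 + v)   # samples of the noisy marginal

score = -r / (sigma2 + v)              # analytic score of p_v = N(0, (sigma2+v) I)
x_hat = r + v * score                  # Tweedie posterior mean
tr_hess = -N / (sigma2 + v)            # analytic trace of the Hessian of log p_v
v_hat = v + (v**2 / N) * tr_hess       # Tweedie posterior variance

# Both match the exact Gaussian conditional statistics:
assert np.allclose(x_hat, r * sigma2 / (sigma2 + v))
assert np.isclose(v_hat, v * sigma2 / (sigma2 + v))
```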

4. State Evolution and Theoretical Guarantees

STMP admits rigorous asymptotic analysis in the large-system limit $M, N \to \infty$ with $M/N$ fixed. The NMSE and effective noise statistics at each module obey a scalar state evolution (SE), recursively tracking the estimation error. For a row-orthogonal operator the recursion takes the form

$v_A^{t} = \frac{N}{M} \left( v_B^{t} + \delta_0^2 \right) - v_B^{t},$

$v_B^{t+1} = \left( \frac{1}{\operatorname{mmse}(v_A^{t})} - \frac{1}{v_A^{t}} \right)^{-1},$

where $\operatorname{mmse}(v) = \lim_{N \to \infty} \frac{1}{N}\, \mathbb{E} \left\| x - \mathbb{E}[x \mid x + \sqrt{v} z] \right\|^2$ with $z \sim \mathcal{N}(0, I_N)$, and $v_A^t$, $v_B^t$ denote the extrinsic variances of Modules A and B.

  • For wireless JADCE (Cai et al., 31 May 2025), similar SE equations propagate block-wise through the matrix-structured inference task.

State evolution precisely predicts the fixed-point and iterative behavior of STMP, allowing performance tuning and principled analysis. In the Bayes-optimal regime, the SE fixed-point matches results from the replica method.
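The SE maps can be iterated numerically. The sketch below assumes a row-orthogonal operator, for which Module A's extrinsic-variance map is $v_A = (N/M)(v_B + \delta_0^2) - v_B$, and a Bernoulli-Gaussian signal prior whose MMSE function is evaluated by Monte Carlo; all constants are illustrative, not taken from the papers:

```python
import numpy as np

def mmse_bg(v, rho=0.1, n=200_000, seed=0):
    """Monte-Carlo MMSE of denoising a Bernoulli-Gaussian signal
    (x = b * g, b ~ Bern(rho), g ~ N(0, 1)) observed at noise level v."""
    rng = np.random.default_rng(seed)
    x = (rng.random(n) < rho) * rng.standard_normal(n)
    r = x + np.sqrt(v) * rng.standard_normal(n)
    # Posterior activity probability and conditional mean, in closed form.
    on = rho * np.exp(-r**2 / (2 * (1 + v))) / np.sqrt(1 + v)
    off = (1 - rho) * np.exp(-r**2 / (2 * v)) / np.sqrt(v)
    x_hat = on / (on + off) * r / (1 + v)
    return float(np.mean((x - x_hat) ** 2))

def state_evolution(delta0=0.05, M_over_N=0.5, iters=15, rho=0.1):
    """Scalar SE: alternate the Module A and Module B variance maps."""
    v_B, trace = 1.0, []
    for _ in range(iters):
        v_A = (v_B + delta0**2) / M_over_N - v_B  # Module A extrinsic (row-orthogonal A)
        m = mmse_bg(v_A, rho)                     # Module B posterior MSE
        v_B = 1.0 / (1.0 / m - 1.0 / v_A)         # Module B extrinsic
        trace.append(v_B)
    return trace
```

Iterating `state_evolution()` shows the extrinsic variance shrinking monotonically toward a fixed point, which is the deterministic prediction of the algorithm's per-iteration NMSE.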

5. Extensions: Quantized STMP (Q-STMP) and Wireless Connectivity

Q-STMP generalizes STMP to quantized measurement channels, including severe cases such as 1-bit quantization:

  • Module C computes componentwise MMSE estimates from quantization bins using closed-form truncated Gaussian expectations.
  • The extrinsic pseudo-measurement is re-injected into the turbo cycle, and the scalar state evolution incorporates the nonlinearity of quantization via analytically evaluated transfer functions.
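Module C's componentwise MMSE dequantization can be sketched from the closed-form moments of a truncated Gaussian; `dequantize_mmse` is a hypothetical helper name, and the Gaussian pseudo-prior $(m, v)$ stands for the extrinsic belief arriving at Module C:

```python
from math import erf, exp, pi, sqrt, isinf

def phi(t):
    """Standard normal pdf (evaluates to 0 at t = +/-inf)."""
    return exp(-0.5 * t * t) / sqrt(2.0 * pi)

def Phi(t):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def xphi(t):
    """t * phi(t), with the correct zero limit at t = +/-inf."""
    return 0.0 if isinf(t) else t * phi(t)

def dequantize_mmse(a, b, m, v):
    """Componentwise MMSE dequantization: posterior mean and variance of
    z ~ N(m, v) given that the quantizer reported z in the bin [a, b]
    (closed-form truncated-Gaussian moments)."""
    s = sqrt(v)
    alpha, beta = (a - m) / s, (b - m) / s
    Z = Phi(beta) - Phi(alpha)                   # bin probability
    d = (phi(alpha) - phi(beta)) / Z
    mean = m + s * d
    var = v * (1.0 + (xphi(alpha) - xphi(beta)) / Z - d * d)
    return mean, var
```

One-bit quantization corresponds to the half-infinite bins $(-\infty, 0]$ and $[0, \infty)$, which the same formulas cover directly.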

In wireless joint activity detection and channel estimation (Cai et al., 31 May 2025), the STMP framework is extended to handle super-nodes representing entire channel matrices, with score-based denoising operating on block-structured priors. Activity detection leverages both the MMSE denoised channel beliefs and explicit computation of device activity posteriors.

6. Empirical Performance and Computational Properties

Extensive experiments validate STMP’s advantages:

  • Compressive Imaging (FFHQ): STMP outperforms conventional message passing, plug-and-play ADMM, score-based posterior sampling, and prior turbo-inference methods on PSNR, SSIM, FID, and LPIPS across a range of subsampling ratios and noise intensities. At low sampling ratios with quantization to one bit, Q-STMP achieves 27.4 dB PSNR compared to 18.7 dB (GTurbo-SR) and 12.2 dB (QCS-SGM).
  • Efficiency: Empirically converges within 8–10 iterations (imaging) or 10–20 iterations (wireless JADCE), requiring just 2 score-network NFEs per iteration, versus hundreds or thousands for diffusion samplers (Cai et al., 16 Dec 2025, Cai et al., 28 Mar 2025).
  • Wireless JADCE: In massive MIMO/OFDM settings at SNR = 10 dB, Q-STMP attains low channel-estimation NMSE and activity-detection error for large device populations, quadrupling the supported access capacity at a fixed error rate compared to the leading EM-based turbo frameworks (Cai et al., 31 May 2025).
| Task | STMP Iterations to Converge | Key Performance Gain |
|------|-----------------------------|----------------------|
| Compressive imaging (clean/quantized) | 8–10 | Highest PSNR/SSIM; best FID/LPIPS |
| Wireless JADCE | 10–20 | ~4× device capacity |

STMP and Q-STMP maintain fast convergence and robustness across broad regimes of operator structure and channel/model uncertainty.

7. Significance, Limitations, and Outlook

STMP establishes a bridge between plug-and-play message-passing and the full flexibility of state-of-the-art deep generative modeling, introduces high sample efficiency via empirical Bayes denoising, and provides rigorous SE-based predictability. It is especially effective in regimes where traditional PnP methods break down due to limited expressive capacity of classic denoisers.

Notable limitations include the reliance on high-quality universal score models and potential numerical instability at extreme undersampling, which can be ameliorated by message-damping strategies.

A plausible implication is that STMP’s architecture is broadly extensible to hybrid nonlinear/quantized/sparse inference tasks beyond those covered in current work, wherever closed-form posterior updates are impractical but MMSE/Tweedie-based denoising is tractable and robust (Cai et al., 28 Mar 2025, Cai et al., 31 May 2025, Cai et al., 16 Dec 2025).
