Score-Based Turbo Message Passing (STMP)
- STMP is a Bayesian iterative inference method that integrates turbo message passing with deep score-based MMSE denoising to solve severely ill-posed linear inverse problems.
- It alternates between linear MMSE estimation and score-based denoising using deep generative models, enabling near-optimal recovery in compressive imaging and wireless joint activity detection.
- Empirical results demonstrate that STMP outperforms traditional methods in speed, convergence, and recovery quality, even with low sampling rates and severe quantization.
Score-Based Turbo Message Passing (STMP) is a Bayesian iterative inference methodology that combines turbo-style message passing with deep score-based minimum mean-squared error (MMSE) denoising. It is designed to achieve near-Bayesian-optimal recovery for severely ill-posed linear inverse problems, with major applications in compressive image recovery and joint activity detection/channel estimation in massive wireless connectivity. STMP replaces classical hand-crafted or non-learned denoisers with powerful deep generative models capable of learning highly expressive score functions, enabling rapid and accurate recovery even at very low sampling rates or in the presence of severe measurement quantization (Cai et al., 28 Mar 2025, Cai et al., 31 May 2025, Cai et al., 16 Dec 2025).
1. Core Principles and Problem Formulation
STMP addresses the standard linear observation model
$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}, \qquad \mathbf{n} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}),$$
where $\mathbf{x} \in \mathbb{R}^n$ (for imaging) or a higher-dimensional object (e.g., the channel matrix in wireless), $\mathbf{A} \in \mathbb{R}^{m \times n}$ ($\mathbb{R}$- or $\mathbb{C}$-valued in wireless) is the known measurement operator, and $\mathbf{y} \in \mathbb{R}^m$ is observed. The goal is to estimate the posterior mean $\mathbb{E}[\mathbf{x} \mid \mathbf{y}]$ (the MMSE solution) or to sample from the posterior $p(\mathbf{x} \mid \mathbf{y})$. When $m \ll n$, the problem is underdetermined; introducing strong learned priors is critical for success.
STMP employs a "turbo" factorization of the posterior, alternating between:
- Module A: Linear MMSE estimation, incorporating the likelihood and a Gaussian approximation of incoming beliefs.
- Module B: Score-based MMSE denoising, plugging in a learned score-network as the empirical Bayes denoiser using Tweedie’s formula.
This alternation, together with extrinsic message updates, results in a rapid and robust iterative scheme (Cai et al., 28 Mar 2025, Cai et al., 16 Dec 2025).
2. Algorithmic Structure and Updates
At each iteration $t$, STMP maintains for Modules A and B a prior mean/variance pair and computes posterior means/variances based on the following update schedule:
- Module A (Linear MMSE Estimator):
  - Prior: $(\mathbf{x}_A^{\mathrm{pri}}, v_A^{\mathrm{pri}})$,
  - Posterior computation (for a partial-orthogonal $\mathbf{A}$):
$$\mathbf{x}_A^{\mathrm{post}} = \mathbf{x}_A^{\mathrm{pri}} + \frac{v_A^{\mathrm{pri}}}{v_A^{\mathrm{pri}} + \sigma^2}\,\mathbf{A}^{\mathsf{H}}\big(\mathbf{y} - \mathbf{A}\mathbf{x}_A^{\mathrm{pri}}\big), \qquad v_A^{\mathrm{post}} = v_A^{\mathrm{pri}} - \frac{m}{n}\,\frac{(v_A^{\mathrm{pri}})^2}{v_A^{\mathrm{pri}} + \sigma^2}.$$
  - Extrinsic outputs to Module B:
$$v_B^{\mathrm{pri}} = \Big(\frac{1}{v_A^{\mathrm{post}}} - \frac{1}{v_A^{\mathrm{pri}}}\Big)^{-1}, \qquad \mathbf{x}_B^{\mathrm{pri}} = v_B^{\mathrm{pri}}\Big(\frac{\mathbf{x}_A^{\mathrm{post}}}{v_A^{\mathrm{post}}} - \frac{\mathbf{x}_A^{\mathrm{pri}}}{v_A^{\mathrm{pri}}}\Big).$$
- Module B (Score-based MMSE Denoiser):
  - Prior: $(\mathbf{x}_B^{\mathrm{pri}}, v_B^{\mathrm{pri}})$,
  - MMSE denoising via Tweedie's formula:
$$\mathbf{x}_B^{\mathrm{post}} = \mathbf{x}_B^{\mathrm{pri}} + v_B^{\mathrm{pri}}\, s_\theta\big(\mathbf{x}_B^{\mathrm{pri}}, v_B^{\mathrm{pri}}\big), \qquad v_B^{\mathrm{post}} = v_B^{\mathrm{pri}} + \frac{(v_B^{\mathrm{pri}})^2}{n}\, s_\phi\big(\mathbf{x}_B^{\mathrm{pri}}, v_B^{\mathrm{pri}}\big),$$
    where $s_\theta$ denotes a trained first-order score network and $s_\phi$ a trained trace-diagonalized second-order score network.
  - Extrinsic outputs to Module A:
$$v_A^{\mathrm{ext}} = \Big(\frac{1}{v_B^{\mathrm{post}}} - \frac{1}{v_B^{\mathrm{pri}}}\Big)^{-1}, \qquad \mathbf{x}_A^{\mathrm{ext}} = v_A^{\mathrm{ext}}\Big(\frac{\mathbf{x}_B^{\mathrm{post}}}{v_B^{\mathrm{post}}} - \frac{\mathbf{x}_B^{\mathrm{pri}}}{v_B^{\mathrm{pri}}}\Big).$$
- Prepare next iteration: $(\mathbf{x}_A^{\mathrm{pri}}, v_A^{\mathrm{pri}}) \leftarrow (\mathbf{x}_A^{\mathrm{ext}}, v_A^{\mathrm{ext}})$.
For quantized measurements, an additional Module C performs component-wise MMSE dequantization, inserting extrinsic pseudo-measurements into the turbo cycle (Cai et al., 16 Dec 2025).
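As a concrete illustration of the unquantized turbo cycle, the sketch below replaces the trained score network with the analytic score of a Gaussian prior $\mathcal{N}(0, \tau\mathbf{I})$, for which the MMSE denoiser is available in closed form; the partial-orthogonal operator, variances, and extrinsic formulas follow standard turbo-CS conventions and are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 128
tau, sigma2 = 1.0, 0.01

# Partial-orthogonal measurement operator: m orthonormal rows (A A^T = I_m).
A = np.linalg.qr(rng.standard_normal((n, n)))[0][:m]
x_true = rng.standard_normal(n) * np.sqrt(tau)
y = A @ x_true + rng.standard_normal(m) * np.sqrt(sigma2)

def score(r, v):
    # Analytic first-order score of the Gaussian-smoothed prior N(0, tau);
    # stands in for the trained score network s_theta.
    return -r / (tau + v)

x_pri, v_pri = np.zeros(n), tau  # Module A prior
for _ in range(10):
    # --- Module A: LMMSE for a partial-orthogonal operator ---
    x_post = x_pri + (v_pri / (v_pri + sigma2)) * (A.T @ (y - A @ x_pri))
    v_post = v_pri - (m / n) * v_pri**2 / (v_pri + sigma2)
    # Extrinsic message to Module B (Gaussian message division)
    v_B = 1.0 / (1.0 / v_post - 1.0 / v_pri)
    x_B = v_B * (x_post / v_post - x_pri / v_pri)
    # --- Module B: score-based MMSE denoising via Tweedie's formula ---
    x_den = x_B + v_B * score(x_B, v_B)   # posterior mean
    v_den = tau * v_B / (tau + v_B)       # posterior variance (analytic here)
    # Extrinsic message back to Module A
    v_pri = 1.0 / (1.0 / v_den - 1.0 / v_B)
    x_pri = v_pri * (x_den / v_den - x_B / v_B)

nmse = np.sum((x_post - x_true) ** 2) / np.sum(x_true ** 2)
print(f"NMSE = {10 * np.log10(nmse):.1f} dB")
```

With a Gaussian prior the scheme reaches the Bayes-optimal LMMSE solution immediately; the power of STMP comes from substituting a learned score for priors with no closed form.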
3. Score-based Priors and MMSE Denoising
The crucial innovation in STMP is the replacement of hand-designed denoisers with deep generative models trained via denoising score matching. Given a noisy observation $\mathbf{r} = \mathbf{x} + \mathbf{w}$, $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, v\mathbf{I})$, Tweedie's formula yields
$$\mathbb{E}[\mathbf{x} \mid \mathbf{r}] = \mathbf{r} + v\, \nabla_{\mathbf{r}} \log p(\mathbf{r}).$$
This expectation is operationalized in STMP by the learned score network $s_\theta(\mathbf{r}, v)$, resulting in $\hat{\mathbf{x}} = \mathbf{r} + v\, s_\theta(\mathbf{r}, v)$.
The posterior variance is estimated by training a second-order score network to approximate the trace of the Hessian $\nabla_{\mathbf{r}}^2 \log p(\mathbf{r})$:
$$v^{\mathrm{post}} = v + \frac{v^2}{n}\, \mathrm{tr}\big(\nabla_{\mathbf{r}}^2 \log p(\mathbf{r})\big).$$
This structure connects STMP with empirical Bayes methodology, ensuring that the denoising step is statistically consistent with the true (though intractable) data posterior (Cai et al., 28 Mar 2025, Cai et al., 16 Dec 2025, Cai et al., 31 May 2025).
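Tweedie's identity can be checked numerically for any prior with a tractable smoothed density. The sketch below uses a scalar Bernoulli-Gaussian prior (an illustrative assumption, not from the papers): the Tweedie estimate $r + v\,\nabla_r \log p(r)$ coincides with the closed-form conditional mean:

```python
import numpy as np

rho, tau, v = 0.2, 4.0, 0.5  # sparsity level, signal variance, noise variance

def gauss(r, s):
    # Zero-mean Gaussian pdf with variance s.
    return np.exp(-r**2 / (2 * s)) / np.sqrt(2 * np.pi * s)

def score(r):
    # First-order score of the Gaussian-smoothed Bernoulli-Gaussian prior:
    # p(r) = (1-rho) N(r; 0, v) + rho N(r; 0, tau + v).
    p = (1 - rho) * gauss(r, v) + rho * gauss(r, tau + v)
    dp = -(1 - rho) * (r / v) * gauss(r, v) - rho * (r / (tau + v)) * gauss(r, tau + v)
    return dp / p

def mmse_direct(r):
    # Closed-form conditional mean E[x | r] for comparison:
    # responsibility of the active component times the Gaussian shrinkage.
    w1 = rho * gauss(r, tau + v)
    pi = w1 / (w1 + (1 - rho) * gauss(r, v))
    return pi * tau * r / (tau + v)

r = np.linspace(-6, 6, 101)
tweedie = r + v * score(r)
print(np.max(np.abs(tweedie - mmse_direct(r))))  # numerically zero
```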
4. State Evolution and Theoretical Guarantees
STMP admits rigorous asymptotic analysis in the large-system limit $m, n \to \infty$ with $m/n \to \delta$. The NMSE and effective noise statistics at each module obey a scalar state evolution (SE) that recursively tracks the estimation error:
- For compressive imaging (Cai et al., 16 Dec 2025):
$$v_B^{\mathrm{pri},(t)} = f_A\big(v_A^{\mathrm{pri},(t)}\big), \qquad v_A^{\mathrm{pri},(t+1)} = f_B\big(v_B^{\mathrm{pri},(t)}\big),$$
where $f_A$ denotes the extrinsic-variance map of the linear MMSE module and $f_B$ the MSE transfer function of the score-based denoiser.
- For wireless JADCE (Cai et al., 31 May 2025), similar SE equations propagate block-wise through the matrix-structured inference task.
State evolution precisely predicts the fixed-point and iterative behavior of STMP, allowing performance tuning and principled analysis. In the Bayes-optimal regime, the SE fixed-point matches results from the replica method.
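A minimal SE recursion can be simulated directly, assuming (for illustration only) a partial-orthogonal operator and a Bernoulli-Gaussian prior whose exact MMSE denoiser MSE is estimated by Monte Carlo; the transfer-function forms mirror the extrinsic updates of the algorithm:

```python
import numpy as np

rho, tau = 0.2, 4.0          # Bernoulli-Gaussian prior: x = 0 w.p. 1-rho, else N(0, tau)
delta, sigma2 = 0.5, 1e-2    # sampling ratio m/n and measurement-noise variance
rng = np.random.default_rng(0)
x = np.where(rng.random(200_000) < rho, rng.normal(0, np.sqrt(tau), 200_000), 0.0)

def gauss(r, s):
    return np.exp(-r**2 / (2 * s)) / np.sqrt(2 * np.pi * s)

def denoiser_mse(v):
    # Monte Carlo MSE of the exact MMSE denoiser for the BG prior at noise level v:
    # this plays the role of the denoiser transfer function f_B's inner MSE.
    r = x + rng.normal(0, np.sqrt(v), x.size)
    w1 = rho * gauss(r, tau + v)
    pi = w1 / (w1 + (1 - rho) * gauss(r, v))
    xhat = pi * tau * r / (tau + v)
    return np.mean((x - xhat) ** 2)

v = 100.0                    # v_A^pri: essentially uninformative initialization
history = [v]
for _ in range(15):
    v_post = v - delta * v**2 / (v + sigma2)   # LMMSE posterior variance (SE)
    v_B = 1.0 / (1.0 / v_post - 1.0 / v)       # extrinsic variance into the denoiser
    mse = denoiser_mse(v_B)
    v = 1.0 / (1.0 / mse - 1.0 / v_B)          # extrinsic variance back to Module A
    history.append(v)
print(history[0], history[-1])  # the effective variance contracts toward a fixed point
```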
5. Extensions: Quantized STMP (Q-STMP) and Wireless Connectivity
Q-STMP generalizes STMP to quantized measurement channels, including severe cases such as 1-bit quantization:
- Module C computes componentwise MMSE estimates from quantization bins using closed-form truncated Gaussian expectations.
- The extrinsic pseudo-measurement is re-injected into the turbo cycle, and the scalar state evolution incorporates the nonlinearity of quantization via analytically evaluated transfer functions.
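The truncated-Gaussian computation in Module C can be sketched for a single component; the function name and parameterization below are illustrative, but the moment formulas are the standard truncated-normal identities combined with Gaussian conditioning:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def phi(t):
    # Standard normal pdf.
    return exp(-t * t / 2) / sqrt(2 * pi)

def Phi(t):
    # Standard normal cdf.
    return 0.5 * (1 + erf(t / sqrt(2)))

def dequantize_mmse(mu, v, sigma2, lo, hi):
    """MMSE estimate of z ~ N(mu, v) given that the pre-quantization value
    s = z + w, with w ~ N(0, sigma2), fell in the quantization bin [lo, hi]."""
    c = v + sigma2                                  # marginal variance of s
    a, b = (lo - mu) / sqrt(c), (hi - mu) / sqrt(c)
    Z = Phi(b) - Phi(a)                             # bin probability
    s_mean = mu + sqrt(c) * (phi(a) - phi(b)) / Z   # truncated-Gaussian mean of s
    return mu + (v / c) * (s_mean - mu)             # condition z on s (joint Gaussian)

# 1-bit example: the sign bit reports s >= 0.
z_hat = dequantize_mmse(mu=0.3, v=1.0, sigma2=0.1, lo=0.0, hi=np.inf)
print(z_hat)
```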
In wireless joint activity detection and channel estimation (Cai et al., 31 May 2025), the STMP framework is extended to handle super-nodes representing entire channel matrices, with score-based denoising operating on block-structured priors. Activity detection leverages both the MMSE denoised channel beliefs and explicit computation of device activity posteriors.
6. Empirical Performance and Computational Properties
Extensive experiments validate STMP’s advantages:
- Compressive Imaging (FFHQ): STMP outperforms conventional message passing, plug-and-play ADMM, score-based posterior sampling, and prior turbo-inference methods on PSNR, SSIM, FID, and LPIPS across a range of subsampling ratios and noise intensities. Under 1-bit quantization at a low sampling rate, Q-STMP achieves 27.4 dB PSNR compared to 18.7 dB (GTurbo-SR) and 12.2 dB (QCS-SGM).
- Efficiency: Empirically converges within 8–10 iterations (imaging) or 10–20 iterations (wireless JADCE), requiring just 2 score-network NFEs per iteration, versus hundreds or thousands for diffusion samplers (Cai et al., 16 Dec 2025, Cai et al., 28 Mar 2025).
- Wireless JADCE: On massive MIMO/OFDM settings at SNR = 10 dB, Q-STMP achieves substantially lower channel NMSE and activity-detection error than the leading EM-based turbo frameworks, quadrupling the supported access capacity at a fixed error target (Cai et al., 31 May 2025).
| Task | STMP Iterations to Converge | Key Performance Gain |
|---|---|---|
| Compressive imaging (clean/quantized) | 8–10 | Highest PSNR/SSIM; best FID/LPIPS |
| Wireless JADCE | 10–20 | ≈4× device capacity |
STMP and Q-STMP maintain fast convergence and robustness across broad regimes of operator structure and channel/model uncertainty.
7. Significance, Limitations, and Outlook
STMP establishes a bridge between plug-and-play message-passing and the full flexibility of state-of-the-art deep generative modeling, introduces high sample efficiency via empirical Bayes denoising, and provides rigorous SE-based predictability. It is especially effective in regimes where traditional PnP methods break down due to limited expressive capacity of classic denoisers.
Notable limitations include the reliance on high-quality universal score models and potential numerical instability at extreme undersampling, which can be ameliorated by message-damping strategies.
A plausible implication is that STMP’s architecture is broadly extensible to hybrid nonlinear/quantized/sparse inference tasks beyond those covered in current work, wherever closed-form posterior updates are impractical but MMSE/Tweedie-based denoising is tractable and robust (Cai et al., 28 Mar 2025, Cai et al., 31 May 2025, Cai et al., 16 Dec 2025).