Convergence analysis for multi-level noise scheduling in Diffusive Gibbs Sampling (DiGS)

Establish convergence guarantees for Diffusive Gibbs Sampling (DiGS) when employing the multi-level variance-preserving noise schedule, which alternates sampling between the Gaussian convolution p_t(tilde{x}_t|x) = N(tilde{x}_t | alpha_t x, (1 - alpha_t^2) I) and the denoising posterior p(x | tilde{x}_t) across the noise levels t = T, ..., 1. Specifically, determine conditions under which the resulting Markov chain is ergodic and converges to the target distribution p(x) in this multi-level setting.
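One way to formalize the question is to write the per-level x-marginal transition kernel and the invariance property that a multi-level proof would build on; the notation below is ours, assembled from the quantities above, not taken from the paper.

```latex
% Per-level kernel on x, obtained by marginalizing the auxiliary variable:
K_t(x, A) = \int_A \int p(x' \mid \tilde{x}_t)\, p_t(\tilde{x}_t \mid x)\,
            \mathrm{d}\tilde{x}_t\, \mathrm{d}x',
\qquad
p_t(\tilde{x}_t \mid x) = \mathcal{N}\!\big(\tilde{x}_t \mid \alpha_t x,\,
                                            (1 - \alpha_t^2) I\big).

% One multi-level sweep composes the kernels from t = T down to t = 1:
K = K_1 \circ K_2 \circ \cdots \circ K_T.

% Each K_t is a two-stage Gibbs sweep on the joint p(x)\, p_t(\tilde{x}_t \mid x),
% so it leaves the target invariant, and hence so does the composition:
p K_t = p \;\; \text{for all } t \quad \Longrightarrow \quad p K = p.
```

Invariance of p under K follows immediately from the single-level Gibbs structure; what remains open is establishing irreducibility and aperiodicity (hence ergodicity) of the composed kernel K, and quantifying its convergence rate.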

Background

Diffusive Gibbs Sampling (DiGS) constructs a Gibbs sampler over the joint p(x, tilde{x}) by alternating between the Gaussian convolution step p(tilde{x}|x) and the denoising posterior step p(x|tilde{x}), enabling improved mixing for multi-modal targets. For a single noise level, the paper establishes theoretical properties of the induced Markov chain, such as irreducibility and recurrence.
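A minimal single-level sketch may help fix ideas. The toy target (an equal-weight two-component Gaussian mixture), the value of alpha, and the MALA settings below are illustrative choices of ours, not values from the paper; the denoising posterior p(x|tilde{x}) is sampled approximately with a few MALA steps, initialized at the denoising point tilde{x}/alpha.

```python
import numpy as np

# Toy 1D target: equal-weight mixture N(-4, 1) and N(+4, 1) (hypothetical example).
rng = np.random.default_rng(0)
MU = np.array([-4.0, 4.0])

def log_p(x):
    # Unnormalized log density of the mixture.
    return np.logaddexp(-0.5 * (x - MU[0]) ** 2, -0.5 * (x - MU[1]) ** 2)

def grad_log_p(x):
    w = np.exp(-0.5 * (x - MU) ** 2)
    w = w / w.sum()
    return float(np.sum(w * (MU - x)))

def digs_sweep(x, alpha=0.2, n_mala=10, step=0.1):
    s2 = 1.0 - alpha ** 2
    # Convolution step: tilde_x ~ N(alpha * x, 1 - alpha^2).
    xt = alpha * x + np.sqrt(s2) * rng.standard_normal()
    # Denoising step: MALA targeting p(x | tilde_x) ∝ p(x) N(tilde_x | alpha x, 1 - alpha^2).
    logpost = lambda z: log_p(z) - (xt - alpha * z) ** 2 / (2 * s2)
    grad = lambda z: grad_log_p(z) + alpha * (xt - alpha * z) / s2
    z = xt / alpha  # denoising initialization
    for _ in range(n_mala):
        prop = z + step * grad(z) + np.sqrt(2 * step) * rng.standard_normal()
        log_fwd = -(prop - z - step * grad(z)) ** 2 / (4 * step)
        log_bwd = -(z - prop - step * grad(prop)) ** 2 / (4 * step)
        if np.log(rng.uniform()) < logpost(prop) - logpost(z) + log_bwd - log_fwd:
            z = prop
    return z

x, samples = -4.0, []
for i in range(3000):
    x = digs_sweep(x)
    if i >= 500:  # discard burn-in
        samples.append(x)
samples = np.array(samples)
```

Started at one mode, the chain visits both modes: the convolution step inflates the noise, so the denoising initialization tilde{x}/alpha regularly lands in the basin of the opposite mode.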

To mitigate sensitivity to convolution hyperparameters and improve practical performance, the paper introduces a multi-level variance-preserving (VP) noise schedule with parameters 0 < alpha_T < ... < alpha_1 < 1, running DiGS sequentially from level T to 1. While this multi-level scheme is empirically effective, the authors explicitly leave the convergence analysis of DiGS under this multi-level schedule for future work, indicating that formal guarantees for ergodicity and convergence to p(x) in the multi-level setting are not yet established.
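The outer loop of the multi-level scheme can be sketched for a target where the denoising posterior is available in closed form, so that each level's Gibbs sweep is exact. For p(x) = N(0, I_d) the posterior is x | tilde{x}_t ~ N(alpha_t tilde{x}_t, (1 - alpha_t^2) I); the dimension, number of levels, and schedule values below are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 2, 5
# VP ladder with 0 < alpha_T < ... < alpha_1 < 1 (illustrative linear spacing).
alphas = np.linspace(0.9, 0.3, T)  # alphas[0] = alpha_1, ..., alphas[-1] = alpha_T

def multilevel_sweep(x):
    # One multi-level DiGS pass, from the highest noise level t = T down to t = 1.
    for alpha in alphas[::-1]:
        s2 = 1.0 - alpha ** 2
        xt = alpha * x + np.sqrt(s2) * rng.standard_normal(d)  # convolution step
        x = alpha * xt + np.sqrt(s2) * rng.standard_normal(d)  # exact denoising step
    return x

x = np.full(d, 5.0)  # deliberately far-from-stationarity start
samples = []
for i in range(5000):
    x = multilevel_sweep(x)
    if i >= 1000:  # discard burn-in
        samples.append(x.copy())
samples = np.array(samples)
```

Here each per-level update is a Gibbs sweep on the joint p(x) p_t(tilde{x}_t|x), so every level leaves p invariant and the sampled moments match N(0, I). The open problem is extending such guarantees, ergodicity and convergence rates for the composed multi-level kernel, beyond tractable cases like this one.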

References

We also leave the convergence analysis of DiGS in the multi-level noise scheduling setting as a future work; see Appendix~\ref{appendix:convergence} for some discussions.

Diffusive Gibbs Sampling  (2402.03008 - Chen et al., 2024) in Conclusion, Limitations and future work