Rates of convergence for GVI posteriors under unbounded divergences

Establish rates of convergence for Generalised Variational Inference (GVI) posterior measures when the divergence D(Q:Π) in the objective T_n(Q) = n·J_{L_n}(Q) + (1/β)·D(Q:Π) is unbounded: specify conditions under which these posteriors concentrate and quantify their convergence behavior on arbitrary Polish hypothesis spaces Θ and for posterior families 𝒬 ⊂ P(Θ).
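
For concreteness, the objective and the resulting posterior can be written in display form (a minimal LaTeX sketch following the notation above; the characterisation of the GVI posterior as the minimiser of $T_n$ over $\mathcal{Q}$ is the standard GVI convention and is assumed here):

$$T_n(Q) \,=\, n \, J_{L_n}(Q) + \frac{1}{\beta} \, D(Q : \Pi), \qquad Q_n^{\ast} \in \arg\min_{Q \in \mathcal{Q}} T_n(Q).$$

The open problem asks for the rate at which $Q_n^{\ast}$ concentrates when $D(\cdot : \Pi)$ can take arbitrarily large values on $\mathcal{Q}$.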

Background

The paper develops existence, uniqueness, asymptotic consistency, and explicit rates of convergence for Generalised Variational Inference (GVI) posteriors when the divergence D(Q:Π) is bounded (e.g., total variation). These results include convergence to neighborhoods of loss minimizers and nearly n^{-1} rates.

In the Extensions subsection, the authors explore the unbounded-divergence setting. They show that existence and uniqueness of GVI posteriors can still be obtained and provide a partial concentration result under additional assumptions. However, they explicitly state that deriving rates of convergence in this unbounded-divergence regime remains unresolved.
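
To see why boundedness matters, consider the following illustrative bound (a sketch under the assumption that the divergence is nonnegative, as divergences are; this is not the paper's own argument): if $D(Q:\Pi) \le C$ for all $Q \in \mathcal{Q}$, then

$$n \, J_{L_n}(Q) \;\le\; T_n(Q) \;\le\; n \, J_{L_n}(Q) + \frac{C}{\beta} \qquad \text{for every } Q \in \mathcal{Q},$$

so the prior-regularisation term contributes at most a uniform O(1) correction to the O(n) loss term. When D is unbounded, this uniform control is lost, which is the obstacle to extending the rate results.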

References

Throughout, we have assumed that \cref{asp:bdd} holds, that is the divergence between all possible posteriors, $\mathcal{Q}$, and all priors, $\mathcal{G}$, is bounded. This has allowed us to make significant progress in establishing asymptotic consistency and rates of convergence of GVI posterior measures; under unbounded divergences this is an open problem.

Rates of Convergence of Generalised Variational Inference Posteriors under Prior Misspecification (2510.03109 - Mildner et al., 3 Oct 2025) in Section 3, Subsection “Extensions”