Rates of convergence for GVI posteriors under unbounded divergences
Establish rates of convergence for Generalised Variational Inference (GVI) posterior measures when the divergence D(Q:Π) in the objective T_n(Q) = n·J_{L_n}(Q) + (1/β)·D(Q:Π) is unbounded. Specify conditions under which these posteriors concentrate, and quantify the convergence behaviour, on arbitrary Polish hypothesis spaces Θ and for posterior families 𝒬 ⊂ P(Θ).
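To make the objective concrete, the following is a minimal sketch, not taken from the paper: it assumes a unit-variance Gaussian likelihood, a Gaussian variational family Q = N(m_q, s_q²), a Gaussian prior Π = N(m_p, s_p²), and D = KL. All function names and the loss J_{L_n}(Q) = E_Q[(1/n)Σᵢ −log p(xᵢ|θ)] are illustrative choices. Note that even KL is unbounded over this family (KL(Q:Π) → ∞ as m_q → ∞), which is exactly the regime the problem asks about.

```python
import math

def kl_gauss(m_q, s_q, m_p, s_p):
    """Closed-form KL( N(m_q, s_q^2) : N(m_p, s_p^2) )."""
    return math.log(s_p / s_q) + (s_q**2 + (m_q - m_p)**2) / (2 * s_p**2) - 0.5

def gvi_objective(data, m_q, s_q, m_p, s_p, beta):
    """Sketch of T_n(Q) = n * J_{L_n}(Q) + (1/beta) * KL(Q : Pi).

    J_{L_n}(Q) is taken to be the expected average negative log-likelihood
    under Q for a N(theta, 1) model, which has a closed form when
    Q = N(m_q, s_q^2): E_Q[(x - theta)^2] = (x - m_q)^2 + s_q^2.
    """
    n = len(data)
    j_ln = sum(
        0.5 * math.log(2 * math.pi) + 0.5 * ((x - m_q) ** 2 + s_q ** 2)
        for x in data
    ) / n
    return n * j_ln + (1.0 / beta) * kl_gauss(m_q, s_q, m_p, s_p)

# The divergence term is unbounded over the variational family:
# pushing the posterior mean away from the prior drives KL to infinity.
print(kl_gauss(0.0, 1.0, 0.0, 1.0))    # matched Gaussians: KL = 0
print(kl_gauss(10.0, 1.0, 0.0, 1.0))   # grows without bound in m_q
print(gvi_objective([0.1, -0.2, 0.3], 0.0, 1.0, 0.0, 1.0, beta=1.0))
```

The sup of kl_gauss over (m_q, s_q) is infinite, so the boundedness assumption the quoted passage relies on fails already for this textbook family; the open problem asks what rate guarantees survive in that situation.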
References
Throughout, we have assumed that \cref{asp:bdd} holds, that is, the divergence between all possible posteriors, $\mathcal{Q}$, and all priors, $\mathcal{G}$, is bounded. This has allowed us to make significant progress in establishing asymptotic consistency and rates of convergence of GVI posterior measures; under unbounded divergences this is an open problem.
— Rates of Convergence of Generalised Variational Inference Posteriors under Prior Misspecification
(2510.03109 - Mildner et al., 3 Oct 2025) in Section 3, Subsection “Extensions”