Consistency of variational inference for Besov priors in non-linear inverse problems (2508.06179v1)
Abstract: This study investigates variational posterior convergence rates for inverse problems governed by partial differential equations (PDEs) with parameters in Besov spaces $B^{\alpha}_{pp}$ ($p \geq 1$), which are naturally modeled in a Bayesian manner using Besov priors constructed via random wavelet expansions with $p$-exponentially distributed coefficients. Departing from exact Bayesian inference, variational inference transforms the inference problem into an optimization problem by introducing variational sets. Building on a refined ``prior mass and testing'' framework, we derive general conditions on the PDE operators that guarantee variational posteriors achieve convergence rates matching those of the exact posterior under widely adopted variational families (Besov-type measures or mean-field families). Moreover, our results achieve minimax-optimal rates over $B^{\alpha}_{pp}$ classes, improving on the suboptimal rates of Gaussian priors by a polynomial factor. As specific examples, two typical nonlinear inverse problems, the Darcy flow problem and the inverse potential problem for a subdiffusion equation, are investigated to validate our theory. In addition, we show that our convergence rates for the ``prediction'' loss in these ``PDE-constrained regression problems'' are minimax optimal.
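For orientation, the following sketch records the standard form of the two objects the abstract refers to; the wavelet normalization and the density constant are illustrative assumptions, not necessarily the paper's exact conventions. A draw from a Besov $B^{\alpha}_{pp}$ prior on a $d$-dimensional domain is typically built from an $L^2$-normalized wavelet basis $(\psi_{j,l})$ with i.i.d. coefficients $\xi_{j,l}$ having density proportional to $\exp(-|x|^p/p)$:

$$u \;=\; \sum_{j \ge 0} \sum_{l} 2^{-j\left(\alpha + \frac{d}{2} - \frac{d}{p}\right)} \, \xi_{j,l} \, \psi_{j,l},$$

so that $u \in B^{s}_{pp}$ almost surely for every $s < \alpha - d/p$. The variational posterior is then the best approximation, in Kullback--Leibler divergence, of the exact posterior $\Pi(\cdot \mid Y)$ within a chosen variational family $\mathcal{Q}$ (e.g., Besov-type measures or mean-field families):

$$\widehat{Q} \;=\; \operatorname*{arg\,min}_{Q \in \mathcal{Q}} \; \mathrm{KL}\big(Q \,\|\, \Pi(\cdot \mid Y)\big).$$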