
Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems (1705.03286v2)

Published 9 May 2017 in math.ST and stat.TH

Abstract: We consider the inverse problem of recovering an unknown functional parameter $u$ in a separable Banach space, from a noisy observation $y$ of its image through a known possibly non-linear ill-posed map ${\mathcal G}$. The data $y$ is finite-dimensional and the noise is Gaussian. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al. 2009), which are well-known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager--Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
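The MAP estimates discussed above are characterized as minimizers of a generalized Onsager--Machlup functional: a least-squares data misfit weighted by the Gaussian noise covariance plus a Besov-norm penalty from the prior. As a purely illustrative, hypothetical sketch (not the paper's construction), one can discretize the problem with a linear forward map `A` and use an $\ell^1$ penalty as a stand-in for a $B^1_{11}$-type Besov norm on wavelet coefficients, then minimize by iterative soft thresholding (ISTA). All names and parameter values here are assumptions for the toy example.

```python
# Hypothetical finite-dimensional sketch of sparsity-promoting MAP estimation:
# minimize J(u) = ||A u - y||^2 / (2 sigma2) + lam * ||u||_1,
# an l1 surrogate for the Besov-prior Onsager--Machlup functional.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; sets small coefficients exactly to zero,
    # which is the sparsity-promoting mechanism.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def map_estimate(A, y, sigma2, lam, n_iter=500):
    # ISTA: gradient step on the smooth misfit, then the l1 proximal step.
    step = sigma2 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the misfit gradient
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ u - y) / sigma2
        u = soft_threshold(u - step * grad, step * lam)
    return u

# Toy example: sparse ground truth, finite-dimensional data with Gaussian noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
u_true = np.zeros(100)
u_true[[3, 20, 77]] = [2.0, -1.5, 1.0]
y = A @ u_true + 0.01 * rng.standard_normal(40)
u_map = map_estimate(A, y, sigma2=1e-4, lam=1.0)
```

With an appropriate regularization weight, the recovered `u_map` is sparse, mirroring the edge-preserving and sparsity-promoting behavior the abstract attributes to Besov priors; the paper's contribution is to justify this minimizer rigorously as the (weak and strong) MAP estimate in the non-parametric limit.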
