Efficient sampling for sparse Bayesian learning using hierarchical prior normalization (2505.23753v1)

Published 29 May 2025 in math.NA and cs.NA

Abstract: We introduce an approach for efficient Markov chain Monte Carlo (MCMC) sampling from challenging high-dimensional distributions in sparse Bayesian learning (SBL). The core innovation is the use of hierarchical prior-normalizing transport maps (TMs): deterministic couplings that transform the sparsity-promoting SBL prior into a standard normal one. We derive these prior-normalizing TMs analytically by exploiting the product-like form of SBL priors together with Knothe–Rosenblatt (KR) rearrangements. The maps transform the complex target posterior into a simpler reference distribution equipped with a standard normal prior, which can then be sampled with more efficient, structure-exploiting samplers. Our numerical experiments on various inverse problems, including signal deblurring, inverting the non-linear inviscid Burgers equation, and recovering an impulse image, demonstrate significant performance improvements for standard MCMC techniques.
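
The key building block is easiest to see in one dimension: for a scalar prior with CDF F, the Knothe–Rosenblatt rearrangement that normalizes it is T(θ) = Φ⁻¹(F(θ)), where Φ is the standard normal CDF, and its inverse pulls reference samples back to the prior. The sketch below illustrates this under simplifying assumptions; it uses a Laplace prior as a hypothetical stand-in for the paper's hierarchical SBL prior, and the names `prior`, `to_reference`, `to_target`, `reference_logpost`, and `loglik` are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm, laplace

# Hypothetical stand-in prior; the paper's hierarchical SBL prior is more involved.
prior = laplace(loc=0.0, scale=1.0)

def to_reference(theta):
    # 1-D Knothe--Rosenblatt map T(theta) = Phi^{-1}(F_prior(theta)):
    # pushes the prior forward to a standard normal.
    return norm.ppf(prior.cdf(theta))

def to_target(z):
    # Inverse map T^{-1}(z) = F_prior^{-1}(Phi(z)): pulls standard-normal
    # reference samples back to the prior.
    return prior.ppf(norm.cdf(z))

def reference_logpost(z, loglik):
    # Pulled-back posterior in reference space: the Jacobian of the map
    # cancels against the original prior density, leaving the log-likelihood
    # evaluated at T^{-1}(z) plus a standard-normal log-prior on z.
    return loglik(to_target(z)) + norm.logpdf(z)

# Sanity check: prior samples pushed through the map should look standard normal.
rng = np.random.default_rng(0)
theta = prior.rvs(size=10_000, random_state=rng)
z = to_reference(theta)
print(z.mean(), z.std())  # roughly 0 and 1
```

In this reference space the prior factor is exactly standard normal, which is what allows samplers that exploit a Gaussian prior to be applied; any MCMC chain run on `reference_logpost` yields target-space samples via `to_target`.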
