On submodularity of the expected information gain (2505.04145v2)
Abstract: We consider finite-dimensional linear Gaussian Bayesian inverse problems with uncorrelated sensor measurements. In this setting, it is known that the expected information gain, quantified by the expected Kullback-Leibler divergence from the posterior measure to the prior measure, is submodular. We present a simple alternative proof of this fact tailored to a weighted inner product space setting arising from discretization of infinite-dimensional inverse problems constrained by partial differential equations (PDEs).
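To see the submodularity claim concretely, here is a minimal numerical sketch (not from the paper; all names and sizes are illustrative assumptions). It uses the standard closed form of the expected information gain for a linear Gaussian model with uncorrelated sensor noise, EIG(S) = (1/2) log det(I + Gamma_pr^{1/2} F_S^T Sigma_S^{-1} F_S Gamma_pr^{1/2}), and checks the diminishing-returns inequality EIG(A ∪ {k}) − EIG(A) ≥ EIG(B ∪ {k}) − EIG(B) for nested sensor subsets A ⊆ B and a sensor k outside B.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper).
n_param, n_sensors = 8, 6

# Forward operator rows F[i] and per-sensor noise variances sigma2[i];
# "uncorrelated measurements" means the noise covariance is diagonal.
F = rng.standard_normal((n_sensors, n_param))
sigma2 = rng.uniform(0.5, 2.0, size=n_sensors)

# A symmetric positive definite prior covariance Gamma_pr.
X = rng.standard_normal((n_param, n_param))
Gamma_pr = X @ X.T + n_param * np.eye(n_param)
L = np.linalg.cholesky(Gamma_pr)  # Gamma_pr = L @ L.T

def eig(S):
    """Expected information gain of sensor subset S for a linear
    Gaussian model: 0.5 * logdet(I + L^T F_S^T Sigma_S^{-1} F_S L),
    which equals 0.5 * logdet(I + Gamma_pr^{1/2} F_S^T Sigma_S^{-1}
    F_S Gamma_pr^{1/2}) by Sylvester's determinant identity."""
    S = sorted(S)
    if not S:
        return 0.0
    H = F[S] @ L  # prior-whitened forward rows
    M = np.eye(n_param) + H.T @ (H / sigma2[S, None])
    return 0.5 * np.linalg.slogdet(M)[1]

# Diminishing returns: for A ⊆ B and k outside B, adding sensor k
# to the smaller set A gains at least as much information.
A, B, k = {0, 2}, {0, 1, 2, 3}, 4
gain_A = eig(A | {k}) - eig(A)
gain_B = eig(B | {k}) - eig(B)
print(gain_A >= gain_B - 1e-12)  # True
```

Because each sensor contributes a positive semidefinite rank-one update inside the log-determinant, this inequality holds for every choice of A ⊆ B and k ∉ B; that diminishing-returns property is exactly the submodularity the paper proves in a weighted inner product space setting.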