Finite-Budget Bounds on Information Gain

Updated 20 November 2025
  • Rigorous limits on mutual information and entropy reduction are established by quantifying resource constraints in quantum, Bayesian, and thermodynamic contexts.
  • Submodular structure and greedy algorithms achieve near-optimal sensor placement and experimental design under finite budgets.
  • Trade-offs between work, memory, and measurement precision guide optimized strategies for information acquisition in constrained settings.

Finite-budget bounds on information gain formalize the maximal achievable mutual information, entropy reduction, or statistical learning about a system or environment when subject to explicit constraints on resources such as the number of measurement rounds, sample size, physical work, memory, or model complexity. Such bounds have been established across quantum measurement theory, Bayesian optimal experimental design, bandit and reinforcement learning, the thermodynamics of information acquisition, and statistical learning theory. The results express performance ceilings, approximation guarantees, and irreversibility penalties that fundamentally govern information acquisition under resource constraints.

1. Fundamental Concepts and Definitions

Information gain quantifies the reduction in uncertainty (typically via Shannon or von Neumann entropy reduction) or, equivalently, the mutual information between measurement outcomes and latent variables. In Bayesian and design-of-experiment contexts, it is tightly linked to the expected Kullback–Leibler (KL) divergence from prior to posterior. In quantum and statistical physics, it aligns with entropy exchange and coherent information. Finite-budget bounds refer to explicit inequalities on the information gain $I$ or cumulative information gain $I_{1:T}$ as a function of the available resource $B$ (where $B$ may be the experiment count, energy, time, number of sensors, or sample size). In submodular and statistical settings, $I(B)$ is often a monotone but sublinear function of the resource allocation.
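
The prior-to-posterior KL identity can be checked directly. The following minimal sketch (an assumed Bernoulli example with a gridded uniform prior, not drawn from any cited paper) verifies numerically that the expected KL divergence from prior to posterior coincides with the mutual information between outcome and parameter:

```python
import numpy as np

# Minimal illustrative sketch: for a Bernoulli outcome Y with parameter Theta under a
# gridded uniform prior, the expected KL divergence from prior to posterior equals the
# mutual information I(Y; Theta). All quantities are in nats.
theta = np.linspace(1e-3, 1 - 1e-3, 2000)        # parameter grid
prior = np.full_like(theta, 1.0 / len(theta))    # uniform prior weights

lik = np.stack([1 - theta, theta])               # p(y | theta) for y = 0, 1
p_y = lik @ prior                                # marginal p(y)

# Mutual information I(Y; Theta) = H(Y) - E_theta[H(Y | theta)]
H_y = -np.sum(p_y * np.log(p_y))
H_y_given_theta = -np.sum(prior * np.sum(lik * np.log(lik), axis=0))
mi = H_y - H_y_given_theta

# Expected information gain = E_y[ KL(posterior || prior) ]
eig = 0.0
for y in (0, 1):
    post = lik[y] * prior / p_y[y]
    eig += p_y[y] * np.sum(post * np.log(post / prior))

print(mi, eig)   # the two quantities agree up to grid discretization error
```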

2. Finite-Budget Information Gain in Quantum Measurement

In quantum measurement processes, the information gain $I_m$, expressed as the Holevo $\chi$ quantity (the difference in von Neumann entropy before and after measurement), is universally bounded by the initial quantum coherence of the system and, when environmental decoherence is included, by the entropy exchange (Sharma et al., 2019). The main bounds are:

  • Coherence-budget bound:

I_m \leq C_R(\rho_S)

where $C_R(\rho_S)$ is the relative entropy of coherence of the initial state in the measurement basis.

  • Entropy-exchange bound:

I_m \leq S_e

where $S_e$ is the entropy exchange between the system and the apparatus plus environment.

These results characterize coherence and entropy exchange as hard budgets for information extraction: maximally coherent (pure) systems and fully entangling measurements can, in principle, saturate these upper limits. Any residual apparatus coherence or mixedness, or environmental decoherence, strictly reduces the attainable $I_m$. The operational implication is that a quantum measurement's information yield is fundamentally capped by the system's distinguishability, and any non-unit purity or robustness of the apparatus "eats into" this cap.
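
The coherence budget itself is straightforward to evaluate: $C_R(\rho_S) = S(\Delta(\rho_S)) - S(\rho_S)$, where $\Delta$ is the dephasing map in the measurement basis. The sketch below uses an illustrative qubit family interpolating between $|+\rangle$ and the maximally mixed state (an assumed example, not one from the cited paper) to show the budget shrinking from one bit to zero as coherence is lost:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, ignoring (numerically) zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho):
    """C_R(rho) = S(diag(rho)) - S(rho) in the computational (measurement) basis."""
    dephased = np.diag(np.diag(rho))
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

# Illustrative qubit states: mixtures of |+><+| with the maximally mixed state.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|, maximally coherent
mixed = np.eye(2) / 2                       # maximally mixed, zero coherence
for p in (1.0, 0.7, 0.0):
    rho = p * plus + (1 - p) * mixed
    print(f"p={p:.1f}  C_R={relative_entropy_of_coherence(rho):.3f} bits")
# The pure |+> state saturates the budget at 1 bit; the maximally mixed state gives 0,
# so the coherence bound permits no information gain from a measurement in this basis.
```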

3. Submodular Structure and Greedy Finite-Budget Guarantees in Bayesian Design

In Bayesian linear-Gaussian inverse problems, the expected information gain (EIG) for a set of $k$ sensors or experiments is a monotone submodular function of that finite resource set (Maio et al., 7 May 2025). The canonical form of the EIG is:

I(S) = \log \det\left(I + \sum_{i\in S} u_i \otimes u_i \right)

where $S$ is the set of chosen sensors and $u_i$ encodes the noise-normalized design vector.

The greedy sensor-placement algorithm then satisfies the finite-budget guarantee:

I(S_{\text{greedy}}) \geq (1 - 1/e)\, I(S^*)

where $S^*$ is the optimal $k$-element set. This result, stemming from classical submodular maximization theory, ensures that with $O(k|V|)$ evaluations (far cheaper than brute-force enumeration), one attains a provable approximation of the optimal finite-budget information gain for large-scale (e.g., PDE-constrained) Bayesian experimental design problems.
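
The greedy rule is simple to implement: at each step, add the sensor with the largest marginal gain in $\log\det$. A minimal sketch follows, using synthetic design vectors as stand-ins (an assumption for illustration) for the noise-normalized rows of a forward operator:

```python
import numpy as np

# Minimal sketch of greedy sensor selection for the log-det EIG above. The candidate
# design vectors u_i are synthetic stand-ins for noise-normalized forward-map rows.
rng = np.random.default_rng(0)
d, n_candidates, k = 20, 200, 5
U = rng.normal(size=(n_candidates, d)) / np.sqrt(d)   # rows play the role of u_i

def eig(indices):
    """I(S) = log det(I + sum_{i in S} u_i u_i^T) for the index set S."""
    Us = U[list(indices)]
    return np.linalg.slogdet(np.eye(d) + Us.T @ Us)[1]

selected = []
for _ in range(k):
    base = eig(selected)
    gain, best = max((eig(selected + [j]) - base, j)
                     for j in range(n_candidates) if j not in selected)
    selected.append(best)
    print(f"pick sensor {best:3d}  marginal EIG gain {gain:.3f}")

# Submodularity guarantees eig(selected) >= (1 - 1/e) * max over all k-subsets.
print("greedy EIG:", round(eig(selected), 3))
```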

4. Thermodynamic and Physical Resource Constraints

In the thermodynamics of information acquisition, information gain is limited by the dissipation and work available for "measurement–update–erase" cycles (Rao, 19 Nov 2025, Nagase et al., 2023). The generalized finite-work bound for $T$ rounds, at inverse temperature $\beta = (k_B T)^{-1}$ and work budget $W_{\mathrm{tot}}$, is:

I_{1:T} \leq \min \left\{ H(\Theta_0),\, \beta W_{\mathrm{tot}} - \sum_{t=1}^T H(Y_t) \right\}

where $H(\Theta_0)$ is the initial prior entropy and $\sum_t H(Y_t)$ is the outcome ("erasure") entropy overhead.
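
Plugging numbers into this bound makes the trade-off concrete. In the sketch below the prior entropy, per-round outcome entropies, and work budget are all assumed illustrative values, not figures from the cited papers:

```python
import numpy as np

# Illustrative evaluation of the finite-work bound above (assumed numbers).
k_B, temperature = 1.380649e-23, 300.0            # J/K, K
beta = 1.0 / (k_B * temperature)

H_prior = 10.0                                    # H(Theta_0) in nats
H_outcomes = np.array([0.6, 0.6, 0.6, 0.6])       # H(Y_t) per round, in nats
W_total = 8.0 * k_B * temperature                 # work budget: 8 k_B T

work_term = beta * W_total - H_outcomes.sum()     # nats extractable given the budget
bound = min(H_prior, work_term)
print(f"I_1:T <= min({H_prior:.2f}, {work_term:.2f}) = {bound:.2f} nats")
# Here the work budget, not the prior uncertainty, is the binding constraint.
```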

A refined speed–cost–information tradeoff appears in continuous-time Markov measurement (Nagase et al., 2023):

\Sigma_\tau^Y \geq I + \frac{W_{p^X}(I)^2}{\tau \langle m \rangle_\tau}

where $\Sigma_\tau^Y$ is the total memory dissipation, $I$ is the mutual information acquired in time $\tau$, and $W_{p^X}(I)$ is the minimal "transport distance" required to reach the target $I$ given the source distribution $p^X$. These results formalize the minimum thermodynamic cost and irreversibility penalty associated with finite-budget acquisition of information in measurement and scientific automation.

5. Information Gain Bounds in Bayesian Optimization and Reinforcement Learning

The finite-budget ceiling on information gain is central to both Gaussian process (GP) bandits and Bayesian RL. In GP-UCB/GP-TS bandits, the maximal information gain $\gamma_T$ over $T$ queries is governed by the kernel's spectral decay (Vakili et al., 2020, Flynn, 5 Oct 2025):

  • General upper bound:

\gamma_T \leq \frac{1}{2} D \log \left(1 + \frac{\bar k T}{\tau D}\right) + \frac{1}{2} \frac{\delta_D T}{\tau}

where $D$ is the chosen truncation rank and $\delta_D$ is the tail mass of the kernel eigenvalues.

  • For Matérn and squared-exponential kernels:

\gamma_T(\text{Matérn}) = O\left(T^{d/(2\nu+d)}\log^{2\nu/(2\nu+d)} T \right),\qquad \gamma_T(\text{SE}) = O\left(\log^{d+1} T\right)

Under these bounds, the cumulative regret $R_T$ of GP-UCB satisfies:

R_T = \tilde O \left( \sqrt{T\,\gamma_T \log T} \right)

This matching of upper and lower information gain bounds up to logarithmic factors establishes the precise dependence of learning efficiency on the finite-query budget and kernel properties.
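
The quantity $\gamma_T$ can also be probed numerically: it is the maximum of $\tfrac{1}{2}\log\det(I + \sigma^{-2} K_A)$ over query sets $A$ of size $T$, and greedy selection approximates this maximum within the usual $(1-1/e)$ submodularity factor. The sketch below uses an assumed squared-exponential kernel on $[0,1]$ with illustrative hyperparameters to show the polylogarithmic growth:

```python
import numpy as np

# Minimal sketch of the maximal information gain gamma_T for a squared-exponential
# kernel on [0, 1]: greedily pick query points to maximize 0.5 * log det(I + K_A / sigma2),
# illustrating the sublinear (polylog) growth in the number of queries T.
grid = np.linspace(0, 1, 400)[:, None]            # candidate inputs (assumed domain)
lengthscale, sigma2 = 0.1, 0.1                    # illustrative hyperparameters

sqdist = (grid - grid.T) ** 2
K = np.exp(-0.5 * sqdist / lengthscale**2)        # SE kernel matrix

def info_gain(idx):
    """0.5 * log det(I + K_AA / sigma2) for the selected index set."""
    K_A = K[np.ix_(idx, idx)]
    return 0.5 * np.linalg.slogdet(np.eye(len(idx)) + K_A / sigma2)[1]

selected, gains = [], []
for _ in range(50):
    best = max((info_gain(selected + [j]), j)
               for j in range(len(grid)) if j not in selected)
    selected.append(best[1])
    gains.append(best[0])

# Greedy lower-bounds gamma_T within a (1 - 1/e) factor; growth is only polylogarithmic.
print([round(g, 2) for g in gains[::10]])
```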

In information-directed sampling (IDS) for reinforcement learning, the cumulative information gain about the unknown MDP is bounded as:

\mathbb{I}(E ; \mathcal{D}_{L+1}) \leq O\left(S^2 A H \log(S T)\right)

for $S$ states, $A$ actions, horizon $H$, and $T$ total steps over $L$ episodes. The Bayesian regret is then bounded in terms of budget and information gain:

\mathrm{BR}_L \leq \sqrt{\mathbb{E}[\Gamma^*] \cdot \mathbb{I}(E ; \mathcal{D}_{L+1}) \cdot L}

where $\mathrm{BR}_L$ is the Bayesian regret after $L$ episodes and $\Gamma^*$ is a worst-case information ratio (Hao et al., 2022).
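
A plug-in of assumed numbers shows how the regret bound scales with the information-gain cap; the values of $S$, $A$, $H$, $L$, and the information-ratio scale below are illustrative assumptions, not results from Hao et al. (2022):

```python
import numpy as np

# Illustrative plug-in of the IDS-style regret bound above (assumed numbers throughout).
S, A, H, L = 10, 4, 20, 5000
T = H * L
info_gain_cap = S**2 * A * H * np.log(S * T)   # O(S^2 A H log(S T)), constants dropped
gamma_star = H                                  # assumed scale for the worst-case information ratio
bayes_regret_bound = np.sqrt(gamma_star * info_gain_cap * L)
print(f"information-gain cap ~ {info_gain_cap:.1f} nats, regret bound ~ {bayes_regret_bound:.1f}")
```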

6. Finite-Budget Estimation and Dimension Reduction in Bayesian Experimental Design

For a finite sample budget $L$, two-stage transport- or density-approximation EIG estimators outperform classical nested Monte Carlo in mean-squared error (MSE) rate (Li et al., 13 Nov 2024). With optimal sample allocation, the two-stage estimator achieves:

\mathrm{MSE} = O(L^{-1})

compared to $O(L^{-2/3})$ for nested Monte Carlo. In high-dimensional parameter spaces, gradient-based upper bounds on the projected mutual information provide explicit control over the loss from dimension truncation. For a chosen error tolerance $\epsilon$, one selects projection ranks $r, s$ so that the residual information loss is below $\epsilon$, enabling computation of a near-optimal EIG under the finite-sample constraint.
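
For reference, the classical nested Monte Carlo baseline that the two-stage estimators improve upon can be written in a few lines. The sketch below uses an assumed 1-D linear-Gaussian model with a closed-form EIG and the standard $L^{2/3}$ outer / $L^{1/3}$ inner sample split that yields the $O(L^{-2/3})$ MSE rate; it is not the two-stage estimator of Li et al.:

```python
import numpy as np

# Minimal sketch of a nested Monte Carlo EIG estimator for the assumed model
# y = theta + noise, where the exact EIG is 0.5 * log(1 + s_theta^2 / s_noise^2).
rng = np.random.default_rng(2)
s_theta, s_noise = 1.0, 0.5
exact_eig = 0.5 * np.log(1 + s_theta**2 / s_noise**2)

def nested_mc_eig(total_budget):
    n_outer = int(total_budget ** (2 / 3))          # classical L^{2/3} / L^{1/3} allocation
    n_inner = max(total_budget // n_outer, 1)
    theta_out = rng.normal(0, s_theta, n_outer)
    y = theta_out + rng.normal(0, s_noise, n_outer)
    # log p(y | theta) minus the log of an inner-loop estimate of the evidence p(y);
    # the Gaussian normalization constant cancels between the two terms.
    log_lik = -0.5 * ((y - theta_out) / s_noise) ** 2
    theta_in = rng.normal(0, s_theta, (n_outer, n_inner))
    log_lik_in = -0.5 * ((y[:, None] - theta_in) / s_noise) ** 2
    log_evidence = np.log(np.mean(np.exp(log_lik_in), axis=1))
    return np.mean(log_lik - log_evidence)

for budget in (10**3, 10**4, 10**5):
    print(budget, round(nested_mc_eig(budget), 3), "exact:", round(exact_eig, 3))
```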

7. Approximate Reversibility and Operational Consequences

Finite-budget bounds on information gain also govern the approximate reversibility of quantum measurements and communication protocols (Buscemi et al., 2016). The key remainder theorem states that a small information gain $\delta$ implies the measurement channel is approximately reversible via a recovery operation, with average fidelity bounded by $F_{\text{avg}} \geq 1 - g(\delta)$, where $g(\delta) = 1 - 2^{-\delta} \approx (\ln 2)\,\delta$. Operationally, this allows simulation of measurement channels at vanishing classical cost as the gain (budget) decreases, establishing direct resource-accuracy trade-offs in measurement compression and quantum information tasks.
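
The penalty function $g(\delta)$ is elementary to evaluate; a short sketch shows how quickly the fidelity floor approaches one as the information gain shrinks:

```python
import numpy as np

# Sketch of the recovery-fidelity bound above: for small information gain delta,
# g(delta) = 1 - 2**(-delta) ~ (ln 2) * delta, so F_avg -> 1 as the gain vanishes.
for delta in (1.0, 0.1, 0.01, 0.001):
    g = 1 - 2.0 ** (-delta)
    print(f"delta={delta:6.3f}  fidelity floor 1-g={1-g:.5f}  linear approx {1-np.log(2)*delta:.5f}")
```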


In summary, finite-budget bounds on information gain emerge as a universal constraint, rooted in entropy, channel capacity, thermodynamics, sample complexity, and combinatorial submodularity, that structures achievable learning, measurement, and optimization outcomes across disciplines. These bounds dictate precise operational trade-offs and yield algorithmic strategies (e.g., greedy selection, optimal transport protocols, federated partitioning) that saturate or approximate these limits under explicit resource constraints.
