Conjecture on lower-bound inequality under uninformative model priors

Determine whether accumulating evidence under uninformative priors over the model variable m, before updating the posterior over models Q(m), guarantees that the variational free energy used in the post-hoc Bayesian model average scheme (with Q(m) = P(m)) remains a lower bound on the variational free energy of the scheme that explicitly infers a posterior over models (so that, in general, Q(m) ≠ P(m)). Equivalently, establish conditions under which the Jensen-inequality-based bound F_posthoc ≤ F_models holds in the Dirichlet-parameterized active inference framework described in the paper.

Background

The paper distinguishes between two approaches: (i) a post-hoc scheme that performs inference and learning under a Bayesian model average prior over models (i.e., using Q(m) = P(m)), and (ii) continual structure learning that explicitly infers a posterior over models Q(m). In the Appendix, the authors relate the corresponding variational free energies and indicate that, under Q(m) = P(m), the post-hoc variational free energy acts as a lower bound on the free energy used when inferring Q(m).
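To make the relationship concrete, the display below recalls the standard Dirichlet identity that brings the digamma function ψ into play, together with one plausible reading, introduced here only for illustration and not taken from the paper, of the Bayesian model average prior as a P(m)-weighted average of per-model Dirichlet counts α^(m):

\[
\mathbb{E}_{\theta \sim \mathrm{Dir}(\alpha)}\!\left[\ln \theta_i\right]
  \;=\; \psi(\alpha_i) \;-\; \psi\!\Big(\sum\nolimits_{j} \alpha_j\Big),
\qquad
\bar{\alpha} \;=\; \sum\nolimits_{m} P(m)\,\alpha^{(m)} .
\]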

They attribute this relationship to Jensen’s inequality, via the concavity of the digamma function under the Dirichlet parameterization of the likelihood parameters. However, they explicitly conjecture, without proof, that accumulating evidence under uninformative model priors before updating posterior beliefs over models ensures that this lower-bound inequality holds. Establishing this result would clarify when post-hoc Bayesian model reduction provides a valid lower bound on the free energy of full model inference during active inference.
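As a quick numerical illustration of the Jensen-inequality mechanism invoked above, the sketch below checks that the digamma function is concave on the positive reals, so that ψ(Σ_m P(m) α^(m)) ≥ Σ_m P(m) ψ(α^(m)) for a uniform (uninformative) prior over models and arbitrary positive Dirichlet counts. The number of models, the counts, and the use of scipy.special.digamma are illustrative choices of this sketch; it verifies the concavity fact only and does not reproduce the paper's free-energy derivation.

```python
# Sanity check (illustrative, not the paper's derivation): the digamma
# function psi is concave on (0, inf), so Jensen's inequality gives
# psi(sum_m P(m) * alpha_m) >= sum_m P(m) * psi(alpha_m).
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

n_models = 4                                 # number of models m (illustrative)
P_m = np.full(n_models, 1.0 / n_models)      # uninformative (uniform) prior over models

# Dirichlet concentration parameters ("counts") for each model, e.g. one
# likelihood column per model, as accumulated from evidence (illustrative values).
alpha = rng.uniform(0.5, 10.0, size=(n_models, 3))

# Average the counts first, then apply digamma (post-hoc / BMA-style averaging) ...
lhs = digamma(np.einsum('m,mk->k', P_m, alpha))
# ... versus averaging the per-model digamma terms (explicit averaging over models).
rhs = np.einsum('m,mk->k', P_m, digamma(alpha))

assert np.all(lhs >= rhs - 1e-12)            # Jensen's inequality for a concave function
print("psi(E[alpha]):", lhs)
print("E[psi(alpha)]:", rhs)
```

Note that the Dirichlet expectation of ln θ_i subtracts two digamma terms, each bounded in the same direction by Jensen's inequality, so concavity alone does not fix the sign of the resulting free-energy difference; this appears to be the gap that the condition on uninformative model priors is conjectured to close.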

References

One might conjecture that accumulating evidence under uninformative model priors—before updating posterior beliefs about hypotheses—ensures the above inequality holds.

Active inference and artificial reasoning (Friston et al., arXiv:2512.21129, 24 Dec 2025), Appendix, final paragraph.