Bayesian Predictive Coding (2503.24016v1)

Published 31 Mar 2025 in cs.LG, cs.AI, and stat.ML

Abstract: Predictive coding (PC) is an influential theory of information processing in the brain, providing a biologically plausible alternative to backpropagation. It is motivated in terms of Bayesian inference, as hidden states and parameters are optimised via gradient descent on variational free energy. However, implementations of PC rely on maximum a posteriori (MAP) estimates of hidden states and maximum likelihood (ML) estimates of parameters, limiting their ability to quantify epistemic uncertainty. In this work, we investigate a Bayesian extension to PC that estimates a posterior distribution over network parameters. This approach, termed Bayesian Predictive Coding (BPC), preserves the locality of PC and results in closed-form Hebbian weight updates. Compared to PC, our BPC algorithm converges in fewer epochs in the full-batch setting and remains competitive in the mini-batch setting. Additionally, we demonstrate that BPC offers uncertainty quantification comparable to existing methods in Bayesian deep learning, while also improving convergence properties. Together, these results suggest that BPC provides a biologically plausible method for Bayesian learning in the brain, as well as an attractive approach to uncertainty quantification in deep learning.

Summary

  • The paper introduces Bayesian Predictive Coding (BPC), extending traditional predictive coding by integrating Bayesian inference to estimate posterior distributions over network parameters and quantify uncertainty.
  • BPC leverages conjugate priors under a matrix normal-Wishart assumption to derive closed-form Hebbian update rules, converging in fewer epochs than standard PC in the full-batch setting and remaining competitive with mini-batches.
  • Empirical evaluations show BPC achieves competitive predictive performance on regression and classification tasks, with uncertainty quantification comparable to established Bayesian deep learning baselines; the authors discuss implications for biological plausibility and model reliability.

Bayesian Predictive Coding: A Comprehensive Overview

The paper "Bayesian Predictive Coding" extends the conventional predictive coding (PC) framework by integrating Bayesian inference to estimate posterior distributions over network parameters. The authors term this innovation Bayesian Predictive Coding (BPC). They address the limitations inherent in traditional PC implementations that rely on maximum a posteriori (MAP) estimates for hidden states and maximum likelihood (ML) estimates for parameters, which restricts their capacity to quantify epistemic uncertainty. The BPC framework employs a rigorous Bayesian approach, maintaining the locality and simplicity characteristic of predictive coding and yielding closed-form Hebbian update rules.

Methodological Advancements

BPC is a natural progression of predictive coding principles within a fully Bayesian framework. By estimating posterior distributions over network parameters, BPC retains the hierarchical Gaussian generative models of PC frameworks but replaces the deterministic MAP/ML point estimates typically used. This Bayesian treatment of parameters enables comprehensive uncertainty quantification, covering both epistemic and aleatoric uncertainty.
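As a sketch of what a parameter posterior looks like in this setting (an assumption inferred from the abstract's matrix normal-Wishart conjugacy, not the paper's exact notation or factorisation), each layer keeps the Gaussian likelihood of PC but treats its weights and precision as random variables:

$$
p\!\left(x_{l-1} \mid x_l, W_l, \Lambda_l\right) = \mathcal{N}\!\left(x_{l-1};\, f(W_l x_l),\, \Lambda_l^{-1}\right),
\qquad
p\!\left(W_l, \Lambda_l\right) = \mathrm{MN}\!\left(W_l;\, M_l,\, \Lambda_l^{-1},\, U_l\right)\,\mathcal{W}\!\left(\Lambda_l;\, V_l,\, \nu_l\right),
$$

so learning amounts to updating the hyperparameters $(M_l, U_l, V_l, \nu_l)$ of each layer's posterior rather than a single point estimate of $W_l$.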

The crux of the BPC algorithm is its use of conjugate priors under a matrix normal-Wishart assumption, which yields closed-form updates for the parameter distributions rather than the gradient-based parameter updates required by standard PC and backpropagation. Under full-batch training the algorithm converges in fewer epochs, as shown in evaluations on datasets including the UCI Energy dataset and MNIST.
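To illustrate why conjugacy gives closed-form, local updates, here is the textbook matrix normal-(inverse-)Wishart posterior update for a single linear-Gaussian layer in NumPy. The function name and parameterisation are illustrative assumptions (the paper places a Wishart prior on the precision, which is equivalent up to inversion); this is a sketch, not the authors' code:

```python
import numpy as np

def conjugate_layer_update(X, Y, M0, L0, Psi0, nu0):
    """Closed-form posterior update for a linear-Gaussian layer Y ≈ X @ W
    with prior W | Sigma ~ MN(M0, L0^{-1}, Sigma), Sigma ~ IW(Psi0, nu0).

    X : (n, p) pre-synaptic activity (e.g. inferred hidden states)
    Y : (n, d) post-synaptic targets (states of the layer being predicted)

    All sufficient statistics (X.T @ X, X.T @ Y, Y.T @ Y) are sums of outer
    products of locally available activity, which is what makes the update Hebbian.
    """
    n = X.shape[0]
    Ln = L0 + X.T @ X                                        # posterior row precision of W
    Mn = np.linalg.solve(Ln, L0 @ M0 + X.T @ Y)              # posterior mean of W
    Psin = Psi0 + Y.T @ Y + M0.T @ L0 @ M0 - Mn.T @ Ln @ Mn  # posterior scale matrix
    nun = nu0 + n                                            # posterior degrees of freedom
    return Mn, Ln, Psin, nun
```

Because these quantities are plain matrix products of local activity, no backpropagated gradients are needed, and the same sufficient statistics can be accumulated mini-batch by mini-batch.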

Empirical Evaluations

The paper presents experiments comparing BPC against standard PC and backpropagation (BP). On datasets such as Two Moons and MNIST, BPC converges quickly and attains competitive accuracy; in particular, it reaches similar accuracy in fewer epochs under full-batch training, pointing to potential computational savings.

The paper also highlights BPC's uncertainty quantification. On synthetic regression tasks, BPC captures both aleatoric and epistemic uncertainty, properties important for the reliability and interpretability of deep learning models. Compared with Bayes by Backprop (BBB) on several UCI regression datasets, BPC delivers comparable uncertainty estimates while converging faster.
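As a rough illustration of the aleatoric/epistemic distinction being evaluated (a generic Monte Carlo decomposition for a single linear readout, not the paper's experimental protocol; all names are hypothetical):

```python
import numpy as np

def decompose_uncertainty(x, weight_samples, noise_cov):
    """Split predictive uncertainty for a linear readout y = W @ x.

    weight_samples : list of (d, p) matrices drawn from the parameter posterior
    noise_cov      : (d, d) observation-noise covariance

    Epistemic variance is the disagreement between posterior samples of W;
    aleatoric variance is the irreducible observation noise.
    """
    means = np.stack([W @ x for W in weight_samples])  # (S, d) predictions per posterior sample
    epistemic = means.var(axis=0)                      # spread across posterior samples
    aleatoric = np.diag(noise_cov)                     # per-output noise variance
    return means.mean(axis=0), epistemic, aleatoric
```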

Theoretical and Practical Implications

The introduction of BPC offers several theoretical and practical ramifications. Firstly, it presents a plausible computational model approximating the brain's learning mechanisms, potentially bridging the gap between biologically plausible models and effective machine learning algorithms. Secondly, the enhancements in uncertainty quantification enable BPC models to provide well-calibrated confidence estimates, a critical requirement for applications demanding high reliability and interpretability.

Despite these advantages, BPC incurs additional computational cost from maintaining matrix normal-Wishart distributions. Future work could explore structured sparse approximations to reduce this overhead in large-scale settings, as well as initialising BPC from models pre-trained with backpropagation, combining the strengths of both paradigms.

Conclusion

Bayesian Predictive Coding is a substantive extension of the predictive coding framework, supported by promising empirical results. The algorithm preserves the biological plausibility of predictive coding while improving convergence and uncertainty quantification. These results point toward more robust and interpretable Bayesian models of neural computation and open promising directions for research and application in artificial intelligence.
