Hessian-Free Laplace in Bayesian Deep Learning (2403.10671v1)

Published 15 Mar 2024 in stat.ML and cs.LG

Abstract: The Laplace approximation (LA) of the Bayesian posterior is a Gaussian distribution centered at the maximum a posteriori estimate. Its appeal in Bayesian deep learning stems from the ability to quantify uncertainty post-hoc (i.e., after standard network parameter optimization), the ease of sampling from the approximate posterior, and the analytic form of model evidence. However, an important computational bottleneck of LA is the necessary step of calculating and inverting the Hessian matrix of the log posterior. The Hessian may be approximated in a variety of ways, with quality varying with a number of factors including the network, dataset, and inference task. In this paper, we propose an alternative framework that sidesteps Hessian calculation and inversion. The Hessian-free Laplace (HFL) approximation uses curvature of both the log posterior and network prediction to estimate its variance. Only two point estimates are needed: the standard maximum a posteriori parameter and the optimal parameter under a loss regularized by the network prediction. We show that, under standard assumptions of LA in Bayesian deep learning, HFL targets the same variance as LA, and can be efficiently amortized in a pre-trained network. Experiments demonstrate comparable performance to that of exact and approximate Hessians, with excellent coverage for in-between uncertainty.
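
The two-point-estimate recipe in the abstract can be made concrete with a small sketch. What follows is a minimal illustration under simplifying assumptions, not the authors' implementation: it uses a toy ridge-regression model with a quadratic log posterior, so both the MAP parameter and the prediction-regularized optimum have closed forms, and a hypothetical step size epsilon. The key identity is that minimizing the loss plus epsilon times the prediction f_theta(x*) shifts the optimum by approximately -epsilon * H^{-1} grad f, so the finite difference (prediction at the MAP minus prediction at the shifted optimum) / epsilon recovers the linearized Laplace variance grad f^T H^{-1} grad f. In this sketch H is inverted only to verify the answer; HFL itself never needs it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: f_theta(x) = theta @ x with a Gaussian prior
# (ridge penalty). For this quadratic loss both point estimates are
# available in closed form, which keeps the sketch exact and short.
n, d = 50, 5
sigma2, lam = 0.25, 1.0                  # noise variance, prior precision
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + np.sqrt(sigma2) * rng.normal(size=n)

H = X.T @ X / sigma2 + lam * np.eye(d)   # Hessian of the negative log posterior
b = X.T @ y / sigma2

theta_map = np.linalg.solve(H, b)        # point estimate 1: the MAP

x_star = rng.normal(size=d)              # query input

# Point estimate 2: minimizer of the loss regularized by the network
# prediction, loss(theta) + epsilon * f_theta(x_star). For a quadratic
# loss this just perturbs the linear term by epsilon * x_star.
epsilon = 1e-3                           # hypothetical step size
theta_eps = np.linalg.solve(H, b - epsilon * x_star)

# HFL variance estimate: finite difference of the prediction with respect
# to the regularization strength. For quadratic loss it matches the
# linearized Laplace variance x*^T H^{-1} x* exactly.
var_hfl = (theta_map @ x_star - theta_eps @ x_star) / epsilon
var_laplace = x_star @ np.linalg.solve(H, x_star)  # reference value only

print(f"HFL variance:     {var_hfl:.6f}")
print(f"Laplace variance: {var_laplace:.6f}")
```

For a deep network the two optima would instead come from standard training and a short fine-tuning run on the prediction-regularized loss, and the paper further amortizes the second estimate across query points in a pre-trained network; this sketch does not attempt either.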
