
Goal-Oriented Bayesian Optimal Experimental Design for Nonlinear Models using Markov Chain Monte Carlo (2403.18072v1)

Published 26 Mar 2024 in stat.CO, cs.LG, stat.ME, and stat.ML

Abstract: Optimal experimental design (OED) provides a systematic approach to quantify and maximize the value of experimental data. Under a Bayesian approach, conventional OED maximizes the expected information gain (EIG) on model parameters. However, we are often interested in not the parameters themselves, but predictive quantities of interest (QoIs) that depend on the parameters in a nonlinear manner. We present a computational framework of predictive goal-oriented OED (GO-OED) suitable for nonlinear observation and prediction models, which seeks the experimental design providing the greatest EIG on the QoIs. In particular, we propose a nested Monte Carlo estimator for the QoI EIG, featuring Markov chain Monte Carlo for posterior sampling and kernel density estimation for evaluating the posterior-predictive density and its Kullback-Leibler divergence from the prior-predictive. The GO-OED design is then found by maximizing the EIG over the design space using Bayesian optimization. We demonstrate the effectiveness of the overall nonlinear GO-OED method, and illustrate its differences versus conventional non-GO-OED, through various test problems and an application of sensor placement for source inversion in a convection-diffusion field.
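The nested Monte Carlo estimator described in the abstract can be sketched in a few dozen lines. Everything below is an illustrative assumption, not the paper's implementation: the observation model `g`, the QoI map `h`, the noise level, the sample sizes, and a plain Metropolis sampler standing in for the paper's MCMC. SciPy's Gaussian KDE plays the role of the kernel density estimates of the prior- and posterior-predictive QoI densities, and the outer Bayesian-optimization loop over designs is omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
SIGMA = 0.1  # observation noise std (assumed)

def g(theta, d):
    # hypothetical nonlinear observation model at design d
    return d * theta**3

def h(theta):
    # hypothetical nonlinear predictive quantity of interest (QoI)
    return np.sin(theta)

def log_post(theta, y, d):
    # unnormalized log posterior: standard normal prior + Gaussian likelihood
    return -0.5 * theta**2 - 0.5 * ((y - g(theta, d)) / SIGMA) ** 2

def metropolis(y, d, n_steps=2000, step=0.5):
    # simple random-walk Metropolis sampler for the posterior theta | y, d
    theta, lp = 0.0, log_post(0.0, y, d)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop, y, d)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[n_steps // 2:])  # discard burn-in

def qoi_eig(d, n_outer=20):
    # prior-predictive QoI density via KDE on pushed-forward prior samples
    z_prior = h(rng.standard_normal(5000))
    kde_prior = gaussian_kde(z_prior)
    kls = []
    for _ in range(n_outer):
        # outer loop: simulate a data realization from the prior predictive
        theta_true = rng.standard_normal()
        y = g(theta_true, d) + SIGMA * rng.standard_normal()
        # inner loop: MCMC posterior samples pushed through the QoI map
        z_post = h(metropolis(y, d))
        kde_post = gaussian_kde(z_post)
        # MC estimate of KL(posterior-predictive || prior-predictive)
        kls.append(np.mean(kde_post.logpdf(z_post) - kde_prior.logpdf(z_post)))
    return float(np.mean(kls))

eig = qoi_eig(d=1.0)
```

Averaging the per-realization KL divergences gives the QoI EIG at the design `d`; the paper then maximizes this quantity over the design space with Bayesian optimization, which would wrap `qoi_eig` as the (noisy, expensive) objective.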

