Multi-fidelity Constrained Optimization for Stochastic Black Box Simulators (2311.15137v1)
Abstract: Constrained optimization of simulator parameters plays a crucial role in a design process. These problems become challenging when the simulator is stochastic and computationally expensive and the parameter space is high-dimensional. Efficient optimization then hinges on gradients with respect to the parameters, but such gradients are unavailable in many legacy, black-box codes. We introduce the algorithm Scout-Nd (Stochastic Constrained Optimization for N dimensions) to tackle these issues by efficiently estimating the gradient, reducing the noise of the gradient estimator, and applying multi-fidelity schemes to further reduce computational effort. We validate our approach on standard benchmarks, demonstrating its effectiveness in optimizing parameters and its better performance compared to existing methods.
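The abstract names the key ingredients: a gradient estimator that works without simulator derivatives, variance reduction for that estimator, and a way to fold constraints into the objective. A minimal sketch of how such a scheme can look is a score-function (REINFORCE-style) gradient over a Gaussian search distribution, with a mean baseline for variance reduction and a quadratic penalty for the constraint. Everything below (the toy simulator, the constraint, the penalty weight `lam`, the function name `scout_nd_step`, and all hyperparameters) is an illustrative assumption, not the paper's actual implementation, and the multi-fidelity component is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    # Noisy black-box objective: a sphere function plus observation noise,
    # standing in for an expensive stochastic simulator.
    return np.sum(x**2) + 0.1 * rng.normal()

def constraint(x):
    # Example inequality constraint g(x) <= 0, here encoding sum(x) >= 1.
    return 1.0 - np.sum(x)

def penalized(x, lam=10.0):
    # Quadratic penalty converts the constrained problem into an
    # unconstrained one, as in classical penalty methods.
    return simulator(x) + lam * max(constraint(x), 0.0) ** 2

def scout_nd_step(mu, sigma, n_samples=64, lr=0.02):
    # Score-function gradient of the smoothed objective
    # E_{x ~ N(mu, sigma^2 I)}[f(x)] w.r.t. mu, using only
    # function evaluations (no simulator derivatives).
    xs = mu + sigma * rng.normal(size=(n_samples, mu.size))
    fs = np.array([penalized(x) for x in xs])
    # Mean baseline reduces the variance of the estimator.
    baseline = fs.mean()
    # grad_mu log N(x; mu, sigma^2 I) = (x - mu) / sigma^2
    score = (xs - mu) / sigma**2
    grad = ((fs - baseline)[:, None] * score).mean(axis=0)
    return mu - lr * grad

mu = np.array([2.0, -1.0])
for _ in range(500):
    mu = scout_nd_step(mu, sigma=0.2)

print(mu)  # should drift toward the constrained optimum near (0.5, 0.5)
```

Because the search distribution keeps a nonzero width, the smoothed penalty is felt slightly inside the feasible region, so the iterate settles a little above the constraint boundary; annealing `sigma` and increasing `lam` over iterations would tighten this in practice.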