
Multi-fidelity Constrained Optimization for Stochastic Black Box Simulators (2311.15137v1)

Published 25 Nov 2023 in math.OC, cs.LG, and stat.ML

Abstract: Constrained optimization of the parameters of a simulator plays a crucial role in a design process. These problems become challenging when the simulator is stochastic, computationally expensive, and the parameter space is high-dimensional. Optimization can be performed efficiently only by utilizing the gradient with respect to the parameters, but these gradients are unavailable in many legacy, black-box codes. We introduce the algorithm Scout-Nd (Stochastic Constrained Optimization for N dimensions) to tackle these issues by efficiently estimating the gradient, reducing the noise of the gradient estimator, and applying multi-fidelity schemes to further reduce computational effort. We validate our approach on standard benchmarks, demonstrating its effectiveness in optimizing parameters and its better performance compared to existing methods.
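The abstract names three ingredients: a gradient estimator that works without simulator derivatives, variance reduction of that estimator, and a penalty treatment of constraints. The sketch below is not the authors' Scout-Nd implementation; it is a minimal illustration of the first two ingredients using a standard score-function (REINFORCE-style) estimator with a mean baseline, applied to a hypothetical noisy quadratic objective with a single inequality constraint handled by a quadratic penalty. All function names and constants here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    # Hypothetical stochastic black-box objective: a sphere function
    # corrupted by observation noise (stands in for an expensive code).
    return np.sum(x**2) + 0.1 * rng.standard_normal()

def constraint(x):
    # Hypothetical inequality constraint g(x) <= 0: require x[0] >= 0.5.
    return 0.5 - x[0]

def penalized_objective(x, penalty=10.0):
    # Quadratic penalty folds the constraint into the objective,
    # as in classical penalty methods for constrained optimization.
    return simulator(x) + penalty * max(constraint(x), 0.0) ** 2

def score_function_grad(mu, sigma, n_samples=64):
    # Sample candidates from N(mu, sigma^2 I); the score-function
    # (REINFORCE) estimator needs only objective *values*, no simulator
    # derivatives. Subtracting the sample-mean baseline reduces variance.
    eps = rng.standard_normal((n_samples, mu.size))
    xs = mu + sigma * eps
    fs = np.array([penalized_objective(x) for x in xs])
    baseline = fs.mean()
    # grad_mu log N(x; mu, sigma^2 I) = (x - mu) / sigma^2 = eps / sigma
    return ((fs - baseline)[:, None] * eps / sigma).mean(axis=0)

# Plain stochastic gradient descent on the distribution mean.
mu = np.array([2.0, -1.5])
for _ in range(300):
    mu -= 0.05 * score_function_grad(mu, sigma=0.2)
```

With these toy settings the mean drifts toward the penalized optimum (roughly x = (0.45, 0)); the paper's method additionally applies multi-fidelity schemes to cheapen the many objective evaluations this kind of estimator requires.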
