
Leveraging Trust for Joint Multi-Objective and Multi-Fidelity Optimization (2112.13901v3)

Published 27 Dec 2021 in cs.LG, math.OC, physics.acc-ph, and physics.plasm-ph

Abstract: In the pursuit of efficient optimization of expensive-to-evaluate systems, this paper investigates a novel approach to Bayesian multi-objective and multi-fidelity (MOMF) optimization. Traditional optimization methods, while effective, often encounter prohibitively high costs in multi-dimensional optimizations of one or more objectives. Multi-fidelity approaches offer potential remedies by utilizing multiple, less costly information sources, such as low-resolution simulations. However, integrating these two strategies presents a significant challenge. We suggest the innovative use of a trust metric to support simultaneous optimization of multiple objectives and data sources. Our method modifies a multi-objective optimization policy to incorporate the trust gain per evaluation cost as one objective in a Pareto optimization problem, enabling simultaneous MOMF at lower costs. We present and compare two MOMF optimization methods: a holistic approach selecting both the input parameters and the trust parameter jointly, and a sequential approach for benchmarking. Through benchmarks on synthetic test functions, our approach is shown to yield significant cost reductions of up to an order of magnitude compared to pure multi-objective optimization. Furthermore, we find that joint optimization of the trust and objective domains outperforms addressing them in a sequential manner. We validate our results on the use case of optimizing laser-plasma acceleration simulations, demonstrating our method's potential in Pareto optimization of high-cost black-box functions. Implementing these methods in existing Bayesian frameworks is simple, and they can be readily extended to batch optimization. With their capability to handle various continuous or discrete fidelity dimensions, our techniques offer broad applicability in solving simulation problems in fields such as plasma physics and fluid dynamics.
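
The core mechanism the abstract describes, appending a trust (fidelity) dimension to the objective vector and weighing the resulting Pareto gain against the evaluation cost, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it uses the fidelity itself as the trust objective, scores a candidate by its hypervolume gain in the augmented objective space divided by an assumed cost model, and leaves out the Bayesian surrogate entirely. The names (`evaluation_cost`, `momf_score`) and the quadratic cost model are assumptions made for this sketch.

```python
def evaluation_cost(fidelity: float) -> float:
    # Assumed cost model (illustrative): evaluations get more
    # expensive as the requested fidelity increases.
    return 1.0 + 4.0 * fidelity**2

def hypervolume_2d(front, ref=(0.0, 0.0)):
    # Exact hypervolume of a 2-D maximization front w.r.t. a
    # reference point, computed as a sum of horizontal slabs.
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front, key=lambda p: p[0], reverse=True):
        if y > prev_y:  # dominated points add no area and are skipped
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def momf_score(front, y_new, fidelity):
    # Trust-gain-per-cost score for one candidate evaluation: the
    # candidate's objective value is paired with its fidelity (the
    # "trust" objective), and the hypervolume gain in the augmented
    # space is divided by the evaluation cost at that fidelity.
    gain = (hypervolume_2d(front + [(y_new, fidelity)])
            - hypervolume_2d(front))
    return gain / evaluation_cost(fidelity)

# Toy comparison: a current trust-augmented front and two candidates.
front = [(0.6, 0.3), (0.4, 0.8)]                   # (objective, trust) pairs
print(momf_score(front, y_new=0.7, fidelity=0.2))  # cheap, low fidelity
print(momf_score(front, y_new=0.8, fidelity=1.0))  # costly, high fidelity
```

In the paper's Bayesian setting the candidate outcome `y_new` is of course unknown before the evaluation, so the analogous quantity would be scored in expectation under a surrogate model (an expected-hypervolume-improvement-style acquisition over the trust-augmented objectives) rather than with a known value as in this toy comparison.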

