
Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement (2105.08195v2)

Published 17 May 2021 in cs.LG, cs.AI, and stat.ML

Abstract: Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement (EHVI) criterion and integrating over this uncertainty in the Pareto frontier. We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique. Through this lens, we derive a natural parallel variant, $q$NEHVI, that reduces computational complexity of parallel EHVI from exponential to polynomial with respect to the batch size. $q$NEHVI is one-step Bayes-optimal for hypervolume maximization in both noisy and noiseless environments, and we show that it can be optimized effectively with gradient-based methods via sample average approximation. Empirically, we demonstrate not only that $q$NEHVI is substantially more robust to observation noise than existing MOBO approaches, but also that it achieves state-of-the-art optimization performance and competitive wall-times in large-batch environments.

Citations (122)

Summary

  • The paper proposes NEHVI, a Bayesian treatment of expected hypervolume improvement (EHVI) that robustly handles noisy observations in multi-objective Bayesian optimization.
  • Its parallel variant, qNEHVI, reduces the computational complexity of batch EHVI from exponential to polynomial in the batch size, aided by cached box decompositions that avoid redundant recomputation.
  • Empirical results show qNEHVI outperforms state-of-the-art methods on synthetic and real-world benchmarks, with particular robustness to observation noise.

Scalable Multi-Objective Bayesian Optimization via Parallel Noisy Expected Hypervolume Improvement

This paper presents a novel approach for handling multi-objective Bayesian optimization (MOBO) in settings characterized by noisy observations and demands for parallel evaluation. The proposed method, termed Noisy Expected Hypervolume Improvement (NEHVI), extends the traditional expected hypervolume improvement (EHVI) to accommodate noise efficiently. By integrating over the posterior uncertainty in the Pareto frontier induced by noisy observations, NEHVI proves robust in noisy environments and scales well to parallel batch evaluations.
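The core idea can be illustrated with a small Monte Carlo sketch for two maximized objectives: draw joint posterior samples of the previously observed points and the candidate, and average the hypervolume gained by the candidate over those samples. The function names here (`hv2d`, `nehvi_mc`) are ours for illustration, not the paper's, and real implementations use box decompositions rather than this 2-D sweep.

```python
def hv2d(points, ref):
    """Hypervolume dominated by `points` (maximization) above reference `ref`,
    computed by a sweep over points sorted by the first objective."""
    pts = sorted((p for p in points if p[0] > ref[0] and p[1] > ref[1]),
                 key=lambda p: p[0], reverse=True)
    hv, prev_y2 = 0.0, ref[1]
    for y1, y2 in pts:
        if y2 > prev_y2:  # only the non-dominated slab contributes
            hv += (y1 - ref[0]) * (y2 - prev_y2)
            prev_y2 = y2
    return hv

def nehvi_mc(candidate_samples, frontier_samples, ref):
    """Average, over joint posterior samples, of the hypervolume gained by
    adding the candidate's sampled outcome to the sampled frontier of
    previously observed (noisy) points."""
    gains = [hv2d(list(front) + [y], ref) - hv2d(front, ref)
             for y, front in zip(candidate_samples, frontier_samples)]
    return sum(gains) / len(gains)
```

In the noiseless case all frontier samples coincide and the estimator reduces to ordinary Monte Carlo EHVI; under noise, averaging over sampled frontiers is what integrates out the uncertainty in the Pareto frontier.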

Key Contributions

  • Novel Acquisition Function: The NEHVI acquisition function addresses the limitations of existing MOBO approaches when objective observations are noisy. By taking a Bayesian approach that integrates over uncertainty in the Pareto frontier, NEHVI effectively handles observation noise, which has historically degraded the performance of hypervolume-based acquisition functions.
  • Parallelism and Scalability: The parallel variant, qNEHVI, reduces computational complexity from exponential to polynomial with respect to the batch size, making it suitable for large-batch applications. This reduction is aided by a cached box decomposition (CBD) technique, which allows the integrals underlying the acquisition function to be evaluated repeatedly without redundant recomputation.
  • Algorithm Performance: The paper empirically shows that qNEHVI surpasses current state-of-the-art methods in optimization performance, particularly in settings with high parallelism and observation noise. It consistently achieves better results across a range of benchmark problems than other MOBO methods such as DGEMO, MESMO, and PFES.
  • Implementation and Open-Source Code: The authors commit to releasing their implementation of NEHVI as open-source software, enhancing reproducibility and encouraging further development and exploration of scalable MOBO methods.
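The polynomial scaling in batch size comes from a sequential greedy structure: candidates are selected one at a time, and each new candidate is scored against the frontier augmented with the already-selected "pending" points. The sketch below shows only that greedy skeleton with deterministic scores; the actual qNEHVI additionally integrates over posterior samples of the pending outcomes, and `hv2d` and `greedy_batch` are illustrative names of ours.

```python
def hv2d(points, ref):
    """Hypervolume dominated by `points` (maximization) above `ref`."""
    pts = sorted((p for p in points if p[0] > ref[0] and p[1] > ref[1]),
                 key=lambda p: p[0], reverse=True)
    hv, prev_y2 = 0.0, ref[1]
    for y1, y2 in pts:
        if y2 > prev_y2:
            hv += (y1 - ref[0]) * (y2 - prev_y2)
            prev_y2 = y2
    return hv

def greedy_batch(pool, frontier, ref, q):
    """Pick q candidates one at a time; each step scores a candidate by its
    hypervolume improvement over frontier + pending points, so a batch of
    size q costs q sequential scoring passes rather than a joint search."""
    batch = []
    for _ in range(q):
        base = hv2d(frontier + batch, ref)
        batch.append(max(pool,
                         key=lambda y: hv2d(frontier + batch + [y], ref) - base))
    return batch
```

Because already-selected points enter the baseline hypervolume, a candidate whose region is covered by a pending point contributes no improvement and will not be picked again, which is what encourages diverse batches.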

Experimental Results

The paper presents extensive evaluations across synthetic and real-world problems, including the DTLZ2 problem and Adaptive Bitrate (ABR) control policy optimization. NEHVI demonstrates superior robustness to various noise levels and maintains competitive performance in terms of computational wall-time, especially when leveraging GPUs for large-batch evaluations. The CBD approach particularly shines in enabling tractable evaluations for large batch sizes, illustrating NEHVI's applicability in high-dimensional, real-world settings.
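For readers unfamiliar with the synthetic benchmark, DTLZ2 is a standard scalable multi-objective test problem (minimization) whose Pareto front is the positive orthant of the unit sphere, attained when the trailing decision variables equal 0.5. A minimal sketch of the two-or-more-objective form:

```python
import math

def dtlz2(x, m=2):
    """DTLZ2 with m minimized objectives; x in [0,1]^d, d >= m.
    g measures distance from the Pareto set (g = 0 on the front)."""
    g = sum((xi - 0.5) ** 2 for xi in x[m - 1:])
    f = []
    for i in range(m):
        v = 1.0 + g
        for xj in x[: m - 1 - i]:           # chain of cosines
            v *= math.cos(xj * math.pi / 2)
        if i > 0:                           # one sine for later objectives
            v *= math.sin(x[m - 1 - i] * math.pi / 2)
        f.append(v)
    return f
```

On the front (trailing coordinates at 0.5, so g = 0), the objectives satisfy f1^2 + ... + fm^2 = 1, which is why hypervolume relative to a fixed reference point is easy to track on this problem.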

Implications and Future Directions

NEHVI addresses key practical problems in MOBO by facilitating efficient and effective optimization in noisy, parallel evaluation settings. This advancement opens pathways for its application in fields such as materials design, automated machine learning, and other domains requiring the balancing of multiple, often conflicting objectives. Future development could focus on extending this framework to more complex environments, incorporating additional constraints, or further improving computational efficiency, potentially through approximations such as random Fourier features in high-dimensional settings. The authors posit that NEHVI can serve as a foundational building block for future MOBO methodologies, integrating with emerging AI technologies and applications.

Overall, this research introduces significant improvements in managing noisy observations within MOBO, offering enhanced scalability and efficacy for solving complex problems in varied scientific and engineering fields.
