BOIDS: High-dimensional Bayesian Optimization via Incumbent-guided Direction Lines and Subspace Embeddings (2412.12918v1)

Published 17 Dec 2024 in stat.ML and cs.LG

Abstract: When it comes to expensive black-box optimization problems, Bayesian Optimization (BO) is a well-known and powerful solution. Many real-world applications involve a large number of dimensions, hence scaling BO to high dimension is of much interest. However, state-of-the-art high-dimensional BO methods still suffer from the curse of dimensionality, highlighting the need for further improvements. In this work, we introduce BOIDS, a novel high-dimensional BO algorithm that guides optimization by a sequence of one-dimensional direction lines using a novel tailored line-based optimization procedure. To improve the efficiency, we also propose an adaptive selection technique to identify most optimal lines for each round of line-based optimization. Additionally, we incorporate a subspace embedding technique for better scaling to high-dimensional spaces. We further provide theoretical analysis of our proposed method to analyze its convergence property. Our extensive experimental results show that BOIDS outperforms state-of-the-art baselines on various synthetic and real-world benchmark problems.

The paper introduces a novel high-dimensional Bayesian Optimization (BO) algorithm, Bayesian Optimization via Incumbent-guided Direction lines and Subspace embeddings (BOIDS). BOIDS addresses the challenges of optimizing expensive black-box functions in high-dimensional spaces by integrating incumbent-guided direction lines, adaptive line selection using a multi-armed bandit technique, and subspace embedding.

Key components and contributions:

  • Incumbent-Guided Direction Lines: BOIDS uses incumbents, specifically global and personal incumbents inspired by Particle Swarm Optimization (PSO), to guide the search process. The incumbent-guided direction $\mathbf{v}^{(t+1)}_i$ for the $i$-th data point $\mathbf{x}^{(t)}_i$ is defined as:

    $\mathbf{v}^{(t+1)}_i = w\,\bar{\mathbf{x}}^{(t)}_i + \mathbf{r}_1 c_1 \bar{\mathbf{p}}^{(t)}_i + \mathbf{r}_2 c_2 \bar{\mathbf{g}}^{(t)}$

    where:

    • $w$, $c_1$, and $c_2$ are coefficients controlling exploration and exploitation.
    • $\mathbf{r}_1$ and $\mathbf{r}_2$ are uniformly random vectors sampled from $\mathcal{U}([0,1]^d)$.
    • $\bar{\mathbf{x}}^{(t)}_i$ is the displacement vector of the $i$-th point $\mathbf{x}^{(t)}_i$ relative to its last update.
    • $\bar{\mathbf{p}}^{(t)}_i$ is the personal best direction of the $i$-th point $\mathbf{x}^{(t)}_i$.
    • $\bar{\mathbf{g}}^{(t)}$ is the global best direction.

    Unlike LineBO, which searches along uniformly random directions, BOIDS exploits information from the incumbents to steer the search toward promising regions.
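
    A minimal NumPy sketch of this update (a hedged illustration: the function name and the default values of $w$, $c_1$, $c_2$ are assumptions, not the paper's settings):

    ```python
    import numpy as np

    def incumbent_guided_direction(x_disp, p_dir, g_dir, w=0.7, c1=1.5, c2=1.5, rng=None):
        """PSO-style incumbent-guided direction update (illustrative sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        d = x_disp.shape[0]
        r1 = rng.uniform(size=d)  # r1 ~ U([0,1]^d)
        r2 = rng.uniform(size=d)  # r2 ~ U([0,1]^d)
        # Inertia term plus elementwise pulls toward personal and global incumbents.
        return w * x_disp + c1 * r1 * p_dir + c2 * r2 * g_dir
    ```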

  • Adaptive Line Selection: The algorithm uses a multi-armed bandit (MAB) approach with Thompson Sampling (TS) to select the most promising line from a set of $m$ incumbent-guided lines. The reward $r_i$ of line $\mathcal{L}_i$ is defined as the maximum TS value among all data points on $\mathcal{L}_i$, i.e., $r_i=\max_{\mathbf{x}\in\mathcal{L}_i} g(\mathbf{x})$, where $g$ is a random realization from the Gaussian Process (GP) posterior. The line $\mathcal{L}_{i^*}$ is selected by maximizing:

    $i^* = \argmax_{i=1,\dots,m} \max_{\mathbf{x}\in \mathcal{L}_i} g(\mathbf{x}) \quad \text{where} \quad g \sim \mathcal{GP}(\mathcal{D})$
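
    A hedged sketch of this selection step, assuming each line has already been discretized into candidate points and that `gp_sample` draws one joint posterior realization (both interface assumptions for illustration):

    ```python
    import numpy as np

    def select_line(lines, gp_sample):
        """Thompson-sampling line selection (illustrative sketch).

        lines     : list of (n_i, d) arrays of candidate points on each line L_i
        gp_sample : callable mapping an (n, d) array to one joint posterior draw
                    (an assumed interface; real code would use a GP library)
        """
        X = np.vstack(lines)              # score all candidates under a single
        g = gp_sample(X)                  # realization g ~ GP(D)
        rewards, start = [], 0
        for pts in lines:
            rewards.append(g[start:start + len(pts)].max())  # r_i = max_x g(x)
            start += len(pts)
        return int(np.argmax(rewards))    # i* = argmax_i r_i
    ```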

  • Incumbent-Guided Line-Based Optimization: BOIDS optimizes an acquisition function $\alpha(\cdot)$ (e.g., EI, TS) over the entire search space $\mathcal{X}$ while also minimizing the Euclidean distances to the personal incumbent $\mathbf{p}_{i^*}^{(t)}$ of the selected point $\hat{\mathbf{x}}_{i^*}^{(t)}$ and to the current global incumbent $\mathbf{g}^{(t)}$: $L_{\mathbf{p}}(\mathbf{x}) = \Vert\mathbf{x}- \mathbf{p}_{i^*}^{(t)}\Vert$ and $L_{\mathbf{g}}(\mathbf{x}) = \Vert\mathbf{x} - \mathbf{g}^{(t)}\Vert$. This is formulated as a multi-objective (MO) optimization problem:

    $\mathcal{P} = \argmax_{\mathbf{x}\in\mathcal{X}} \big( f_\alpha(\mathbf{x}), f_{\mathbf{p}}(\mathbf{x}), f_{\mathbf{g}}(\mathbf{x}) \big)$

    where $f_\alpha(\cdot) = \alpha(\cdot)$, $f_{\mathbf{p}}(\cdot)=-L_{\mathbf{p}}(\cdot)$, and $f_{\mathbf{g}}(\cdot)=-L_{\mathbf{g}}(\cdot)$. The solution with the best acquisition objective on the Pareto front is selected as the next data point.
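
    A minimal sketch of this step over a finite candidate set (hedged: a real implementation would use a proper MO solver such as NSGA-II, and the helper names here are assumptions):

    ```python
    import numpy as np

    def pareto_front(F):
        """Boolean mask of non-dominated rows of F (maximizing every column)."""
        mask = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            if mask[i]:
                worse = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                mask[worse] = False       # drop candidates dominated by F[i]
        return mask

    def next_point(X_cand, acq, p_inc, g_inc):
        """Pick the Pareto-optimal candidate with the best acquisition value."""
        f_alpha = acq(X_cand)                            # f_alpha = alpha(x)
        f_p = -np.linalg.norm(X_cand - p_inc, axis=1)    # f_p = -L_p(x)
        f_g = -np.linalg.norm(X_cand - g_inc, axis=1)    # f_g = -L_g(x)
        F = np.column_stack([f_alpha, f_p, f_g])
        front = np.where(pareto_front(F))[0]
        return X_cand[front[np.argmax(f_alpha[front])]]
    ```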

  • Subspace Embedding: To further improve scalability, BOIDS incorporates a linear random subspace embedding strategy, specifically the adaptive expanding subspace embedding known as BAxUS. This reduces the dimensionality of the optimization problem by projecting data points from a low-dimensional subspace $\mathcal{A}$ to the original input space $\mathcal{X}$ via a projection matrix $\mathbf{S}$.
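
    A hedged sketch of the projection step (a sparse sign embedding in the spirit of HeSBO/BAxUS; the actual BAxUS construction, including its adaptive expansion of the target dimensionality, is more involved):

    ```python
    import numpy as np

    def sparse_embedding(D, d, rng=None):
        """Sparse sign projection S in R^{D x d}: each input dimension is
        mapped to one target dimension with a random sign (count-sketch style)."""
        rng = np.random.default_rng() if rng is None else rng
        S = np.zeros((D, d))
        cols = rng.integers(0, d, size=D)   # target dim for each input dim
        S[np.arange(D), cols] = rng.choice([-1.0, 1.0], size=D)
        return S

    D, d = 500, 20                          # ambient and subspace dimensions
    S = sparse_embedding(D, d)
    z = np.random.default_rng(0).uniform(-1, 1, size=d)  # point in subspace A
    x = np.clip(S @ z, -1.0, 1.0)           # its image in X = [-1, 1]^D
    ```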
  • Theoretical Analysis: The paper provides a theoretical analysis of BOIDS, covering local and global convergence properties. It derives a sub-linear simple regret bound for the incumbent-guided line-based optimization, showing convergence to a local optimum. The simple regret $r_T$ is bounded as:

    $\mathbb{E}[r_T] \le \mathcal{O}\big((d \log{T}/T)^{1/2-\kappa}\big)$

    where $d$ is the input dimension and $\kappa$ depends on the smoothness of the kernel. The global convergence property is analyzed via the probability that the BAxUS embedding contains the global optimum.

  • Experimental Results: The proposed method is evaluated against state-of-the-art baselines, including standard BO, LineBO, BAxUS, HESBO, ALEBO, SAASBO, TuRBO, CMA-BO, RDUCB, CMA-ES and PSO, on a range of synthetic and real-world benchmark problems with dimensions ranging from 100 to 500. The results demonstrate that BOIDS outperforms these baselines on most problems. Ablation studies are conducted to assess the contribution of each component of BOIDS. Removing the incumbent-guided line component and the tailored line-based optimization significantly degrades performance.

In summary, BOIDS presents a method for high-dimensional black-box optimization that combines incumbent-guided search directions with multi-objective acquisition optimization and subspace embedding. The theoretical analysis and empirical results support the effectiveness of the proposed approach.
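
For concreteness, the sketches above can be wired into a toy loop. This is a hedged, self-contained illustration only: the quadratic objective is a stand-in, the GP posterior sample and acquisition are faked with (noisy) objective values, and the inertia term is zeroed out; none of this reflects the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n_line = 20, 5, 32

def f(x):                                   # toy objective to maximize
    return -float(np.sum(x ** 2))

# Stand-ins for the GP: a noisy objective as the "posterior sample" and
# the exact objective as the "acquisition" (illustration only).
gp_sample = lambda X: np.array([f(x) for x in X]) + 0.1 * rng.normal(size=len(X))
acq = lambda X: np.array([f(x) for x in X])

points = rng.uniform(-1, 1, size=(m, d))    # current population of points
p_best = points.copy()                      # personal incumbents
g_best = points[np.argmax([f(x) for x in points])].copy()

for t in range(10):
    # 1. Incumbent-guided directions (displacement term omitted in this toy).
    dirs = [incumbent_guided_direction(np.zeros(d), p_best[i] - points[i],
                                       g_best - points[i], rng=rng)
            for i in range(m)]
    lines = [np.clip(points[i] + np.linspace(-1, 1, n_line)[:, None] * dirs[i],
                     -1, 1) for i in range(m)]
    # 2. Thompson-sampling line selection; 3. MO point selection on that line.
    i_star = select_line(lines, gp_sample)
    x_next = next_point(lines[i_star], acq, p_best[i_star], g_best)
    # 4. Evaluate and refresh the incumbents.
    if f(x_next) > f(p_best[i_star]): p_best[i_star] = x_next
    if f(x_next) > f(g_best):         g_best = x_next
    points[i_star] = x_next

print("best value found:", f(g_best))
```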

Authors (4)
  1. Lam Ngo
  2. Huong Ha
  3. Jeffrey Chan
  4. Hongyu Zhang