- The paper introduces BOIDS, a novel Bayesian Optimization method for high-dimensional spaces combining incumbent-guided direction lines, adaptive line selection via multi-armed bandit, and subspace embedding.
- BOIDS leverages incumbent information from global and personal bests to guide the search directions, unlike methods using purely random directions.
- Experiments on synthetic and real-world problems show that BOIDS outperforms state-of-the-art methods for high-dimensional optimization, supported by theoretical convergence analysis.
The paper introduces a novel high-dimensional Bayesian Optimization (BO) algorithm called Bayesian Optimization via Incumbent-guided Direction lines and Subspace embeddings (BOIDS). BOIDS addresses the challenges of optimizing expensive black-box functions in high-dimensional spaces by integrating incumbent-guided direction lines, adaptive line selection using a multi-armed bandit technique, and subspace embedding.
Key components and contributions:
- Incumbent-Guided Direction Lines: BOIDS uses incumbents, specifically global and personal incumbents inspired by Particle Swarm Optimization (PSO), to guide the search process. The incumbent-guided direction $\mathbf{v}_i^{(t+1)}$ for the $i$-th data point $\mathbf{x}_i^{(t)}$ is defined as:
$\mathbf{v}^{(t+1)}_i = w\bar{\mathbf{x}}^{(t)}_i + \mathbf{r}_1 c_1 \bar{\mathbf{p}}_i^{(t)} + \mathbf{r}_2 c_2 \bar{\mathbf{g}}^{(t)}$
where:
- $w$, $c_1$, and $c_2$ are coefficients controlling exploration and exploitation.
- $\mathbf{r}_1$ and $\mathbf{r}_2$ are uniformly random vectors sampled from $\mathcal{U}([0,1]^d)$.
- $\bar{\mathbf{x}}_i^{(t)}$ is the displacement vector of the $i$-th point $\mathbf{x}_i^{(t)}$ relative to its last update.
- $\bar{\mathbf{p}}_i^{(t)}$ is the personal best direction of the $i$-th point $\mathbf{x}_i^{(t)}$.
- $\bar{\mathbf{g}}^{(t)}$ is the global best direction.
This approach contrasts with LineBO, which uses uniformly random directions, by exploiting information from the incumbents to guide the search towards promising regions.
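As a rough sketch, the PSO-style direction update above can be written as follows. The coefficient defaults and the function name are illustrative choices, not values from the paper:

```python
import numpy as np

def incumbent_guided_direction(x_disp, p_dir, g_dir, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Sketch of the PSO-style incumbent-guided direction update.

    x_disp : displacement of the i-th point since its last update
    p_dir  : direction toward the personal best of the i-th point
    g_dir  : direction toward the global best
    w, c1, c2 : exploration/exploitation coefficients (illustrative defaults)
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x_disp.shape[0]
    r1 = rng.uniform(0.0, 1.0, size=d)  # r1 ~ U([0,1]^d), elementwise scaling
    r2 = rng.uniform(0.0, 1.0, size=d)  # r2 ~ U([0,1]^d)
    return w * x_disp + c1 * r1 * p_dir + c2 * r2 * g_dir
```

The resulting vector defines the direction of one candidate line passing through the current point.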
- Adaptive Line Selection: The algorithm uses a multi-armed bandit (MAB) approach with Thompson Sampling (TS) to select the best line from a set of $m$ incumbent-guided lines. The reward $r_i$ of line $\mathcal{L}_i$ is the maximum TS value among all data points on $\mathcal{L}_i$, i.e., $r_i = \max_{\mathbf{x}\in\mathcal{L}_i} g(\mathbf{x})$, where $g$ is a random realization from the Gaussian Process (GP) posterior. The line $\mathcal{L}_{i^*}$ is selected by maximizing:
$i^* = \argmax_{i=1,\dots,m}{\max_{\mathbf{x}\in \mathcal{L}_i}{g(\mathbf{x})}} \quad \text{where} \quad g \sim \mathcal{GP}(\mathcal{D})$
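The selection rule can be sketched as below, assuming (for illustration) that each line has been discretized into candidate points and that a single posterior sample $g$ is available as a callable:

```python
import numpy as np

def select_line(lines, gp_sample):
    """Pick the line whose best candidate under one GP posterior sample is highest.

    lines     : list of (n_i, d) arrays of candidate points on each line L_i
    gp_sample : callable standing in for one realization g ~ GP(D) (an assumption;
                in practice this would be drawn from the fitted GP posterior)
    """
    # r_i = max_{x in L_i} g(x), one reward per line
    rewards = [np.max(gp_sample(pts)) for pts in lines]
    return int(np.argmax(rewards))  # i* = argmax_i r_i
```

Sampling a fresh realization $g$ at each iteration is what makes this a Thompson-sampling bandit rather than a greedy rule.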
- Incumbent-Guided Line-Based Optimization: BOIDS optimizes an acquisition function $\alpha(\cdot)$ (e.g., EI, TS) over the entire search space $\mathcal{X}$ while imposing Euclidean distance objectives toward the personal incumbent $\mathbf{p}_{i^*}^{(t)}$ of the selected point $\mathbf{x}_{i^*}^{(t)}$ and the current global incumbent $\mathbf{g}^{(t)}$: $L_{\mathbf{p}}(\mathbf{x}) = \Vert\mathbf{x}- \mathbf{p}_{i^*}^{(t)}\Vert$ and $L_{\mathbf{g}}(\mathbf{x}) = \Vert\mathbf{x} - \mathbf{g}^{(t)}\Vert$. This is formulated as a multi-objective (MO) optimization problem:
$\mathcal{P} = \argmax_{\mathbf{x}\in\mathcal{X}}{\big( f_\alpha(\mathbf{x}), f_{\mathbf{p}}(\mathbf{x}), f_{\mathbf{g}}(\mathbf{x}) \big)}$
where $f_\alpha(\cdot)=\alpha(\cdot)$, $f_{\mathbf{p}}(\cdot)=-L_{\mathbf{p}}(\cdot)$, and $f_{\mathbf{g}}(\cdot)=-L_{\mathbf{g}}(\cdot)$. The solution with the best acquisition objective on the Pareto front is selected as the next data point.
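A minimal sketch of this Pareto-based selection over a finite candidate set, assuming precomputed acquisition values and a simple quadratic non-dominated filter (all helper names here are illustrative):

```python
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of objective matrix F (maximization)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row j dominates row i if it is >= everywhere and > somewhere.
        dominated = np.any(np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1))
        mask[i] = not dominated
    return mask

def select_next_point(X, acq, p_inc, g_inc):
    """Among candidates X, pick the Pareto-optimal point with best acquisition.

    acq : acquisition values f_alpha(x), one per candidate (assumed precomputed)
    """
    f_p = -np.linalg.norm(X - p_inc, axis=1)  # f_p(x) = -L_p(x)
    f_g = -np.linalg.norm(X - g_inc, axis=1)  # f_g(x) = -L_g(x)
    F = np.column_stack([acq, f_p, f_g])      # three objectives, all maximized
    idx = np.flatnonzero(pareto_front(F))
    return X[idx[np.argmax(acq[idx])]]        # best acquisition on the front
```

Negating the distances turns the "stay close to the incumbents" constraints into maximization objectives alongside the acquisition value.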
- Subspace Embedding: To further improve scalability, BOIDS incorporates a linear random subspace embedding strategy, specifically the adaptive expanding subspace embedding BAxUS. This reduces the dimensionality of the optimization problem by projecting data points from a low-dimensional subspace $\mathcal{A}$ to the original input space $\mathcal{X}$ via a projection matrix $\mathbf{S}$.
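A sparse sign projection in the spirit of HeSBO/BAxUS-style count-sketch embeddings can be sketched as follows. This is an assumption about the construction for illustration only; the actual BAxUS algorithm additionally expands the target dimensionality adaptively over time:

```python
import numpy as np

def random_embedding_matrix(d, d_low, rng=None):
    """Sketch of a sparse sign embedding S in R^{d x d_low}.

    Each input dimension is linked to exactly one low-dimensional
    coordinate with a random sign (count-sketch style; an assumption,
    not the exact BAxUS construction).
    """
    rng = np.random.default_rng() if rng is None else rng
    S = np.zeros((d, d_low))
    cols = rng.integers(0, d_low, size=d)    # each input axis maps to one target bin
    signs = rng.choice([-1.0, 1.0], size=d)  # random sign per input axis
    S[np.arange(d), cols] = signs
    return S

def project_up(z, S):
    """Map a low-dimensional point z in the subspace A to the input space X."""
    return S @ z
```

The GP and the line-based search then operate on the low-dimensional $z$, while the expensive function is evaluated at the projected point in $\mathcal{X}$.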
- Theoretical Analysis: The paper provides theoretical analysis of BOIDS, including local and global convergence properties. It derives a sub-linear simple regret bound for the incumbent-guided line-based optimization, showing convergence to a local optimum. The simple regret $r_T$ is bounded as:
$\mathbb{E}[r_T] \leq \mathcal{O}\big((d\log T / T)^{1/2-\kappa}\big)$
where d is the input dimension and κ depends on the smoothness of the kernel. The global convergence property is analyzed using the probability of the BAxUS embedding to contain the global optimum.
- Experimental Results: The proposed method is evaluated against state-of-the-art baselines, including standard BO, LineBO, BAxUS, HeSBO, ALEBO, SAASBO, TuRBO, CMA-BO, RDUCB, CMA-ES, and PSO, on a range of synthetic and real-world benchmark problems with dimensions ranging from 100 to 500. The results demonstrate that BOIDS outperforms these baselines on most problems. Ablation studies assess the contribution of each component: removing either the incumbent-guided line component or the tailored line-based optimization significantly degrades performance.
In summary, BOIDS presents a method for high-dimensional black-box optimization that combines incumbent-guided search directions with multi-objective acquisition optimization and subspace embedding. The theoretical analysis and empirical results support the effectiveness of the proposed approach.