
Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search

Published 1 Jul 2020 in cs.LG, cs.AI, cs.RO, math.OC, and stat.ML | (2007.00708v2)

Abstract: High-dimensional black-box optimization has broad applications but remains a challenging problem to solve. Given a set of samples $\{x_i, y_i\}$, building a global model (as in Bayesian Optimization (BO)) suffers from the curse of dimensionality in the high-dimensional search space, while a greedy search may lead to sub-optimality. By recursively splitting the search space into regions with high/low function values, recent works like LaNAS show good performance in Neural Architecture Search (NAS), empirically reducing the sample complexity. In this paper, we introduce LA-MCTS, which extends LaNAS to other domains. Unlike previous approaches, LA-MCTS learns the partition of the search space from a few samples and their function values in an online fashion. While LaNAS uses a linear partition and performs uniform sampling in each region, LA-MCTS adopts a nonlinear decision boundary and learns a local model to pick good candidates. If the nonlinear partition function and the local model fit the ground-truth black-box function well, then good partitions and candidates can be reached with far fewer samples. LA-MCTS serves as a \emph{meta-algorithm} by using existing black-box optimizers (e.g., BO, TuRBO) as its local models, achieving strong performance on general black-box optimization and reinforcement learning benchmarks, in particular for high-dimensional problems.

Citations (108)

Summary

  • The paper introduces LA-MCTS, which partitions the search space using non-linear decision boundaries to enhance optimization efficiency.
  • It integrates local models like Bayesian Optimization and TuRBO to reduce sample complexity while avoiding over-exploration.
  • Empirical evaluations on benchmarks such as MuJoCo demonstrate that LA-MCTS outperforms state-of-the-art methods on high-dimensional tasks.

This paper presents a novel approach, Latent Action Monte Carlo Tree Search (LA-MCTS), aimed at enhancing the efficiency of high-dimensional black-box optimization. The authors focus on addressing the challenges of black-box optimization, which is frequently encountered in fields such as Neural Architecture Search (NAS), robotics, and reinforcement learning.

Methodology Overview

LA-MCTS combines ideas from tree search algorithms and machine learning. The core idea is to learn to partition the search space effectively: by dividing it into high- and low-performing regions, LA-MCTS concentrates the search in promising areas, thereby reducing sample complexity. Unlike prior approaches such as LaNAS, which uses linear boundaries, LA-MCTS learns nonlinear decision boundaries, offering greater flexibility and adaptability across problem landscapes.
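The partition step above can be sketched in a few lines. Note this is a simplified, self-contained illustration: the paper learns the boundary with clustering over samples followed by a kernel SVM, whereas here a median split on function values plus a k-nearest-neighbor rule stands in as the nonlinear decision boundary, and all function names are illustrative.

```python
# Illustrative sketch of learning a good/bad partition from samples.
# The paper uses clustering plus a kernel SVM; the median split and
# k-NN rule below are simplified stand-ins.

def split_by_value(samples):
    """Label each (x, y) pair good (1) or bad (0) by the median of y.

    x is a tuple of coordinates; y is the objective value (minimized).
    """
    ys = sorted(y for _, y in samples)
    median = ys[len(ys) // 2]
    return [(x, 1 if y < median else 0) for x, y in samples]

def in_good_region(labeled, query, k=3):
    """Nonlinear decision rule: majority label among the k nearest samples."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(labeled, key=lambda p: dist2(p[0], query))[:k]
    return 1 if 2 * sum(lbl for _, lbl in nearest) > k else 0

# Toy 1-D objective y = (x - 2)^2: points near x = 2 should be 'good'.
samples = [((x,), (x - 2.0) ** 2) for x in (0.0, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0)]
labeled = split_by_value(samples)
print(in_good_region(labeled, (2.1,)), in_good_region(labeled, (4.2,)))  # 1 0
```

Queries near the optimum land in the learned "good" region, which is exactly the signal the tree uses to decide where to spend future samples.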

Integration with Local Models

A crucial feature of LA-MCTS is its capability to function as a meta-algorithm. It leverages existing black-box optimizers, like Bayesian Optimization (BO) and TuRBO, as local models within each partitioned sub-region. This integration allows LA-MCTS not only to adapt to diverse problem structures but also to refine boundaries based on real-time feedback, improving optimization efficiency particularly in high-dimensional contexts.
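The meta-algorithm loop can be sketched as follows, assuming a UCB-based descent of the partition tree and a pluggable local solver. This is a hedged sketch, not the authors' implementation: real LA-MCTS plugs in BO or TuRBO as the local model, while a constrained random search stands in here so the example is self-contained, and the names (`Node`, `select_leaf`, `local_solver`) are illustrative.

```python
import math
import random

# Sketch of the meta-algorithm loop: descend the partition tree by UCB,
# then hand the chosen region to a pluggable local solver.

class Node:
    def __init__(self, region, children=()):
        self.region = region            # 1-D box (lo, hi) in this sketch
        self.children = list(children)
        self.visits = 0
        self.value = 0.0                # running mean of -y (higher = better)

def ucb(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")             # try unvisited regions first
    return child.value + c * math.sqrt(math.log(parent_visits) / child.visits)

def select_leaf(root):
    node = root
    while node.children:
        node = max(node.children, key=lambda ch: ucb(ch, node.visits))
    return node

def local_solver(region, f, budget=8):
    """Stand-in for BO/TuRBO: random search restricted to the region."""
    lo, hi = region
    return min(f(random.uniform(lo, hi)) for _ in range(budget))

f = lambda x: (x - 2.0) ** 2            # toy objective to minimize
root = Node((0.0, 4.0), [Node((0.0, 2.0)), Node((2.0, 4.0))])
for _ in range(10):
    leaf = select_leaf(root)
    y = local_solver(leaf.region, f)
    leaf.visits += 1
    root.visits += 1
    leaf.value += (-y - leaf.value) / leaf.visits  # incremental mean
```

The UCB term is what balances exploiting the currently best region against exploring under-visited ones; swapping `local_solver` for a BO or TuRBO call is what makes the scheme a meta-algorithm.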

Empirical Evaluations and Results

The experimental results presented in the paper demonstrate the efficacy of LA-MCTS across a range of benchmarks, including complex reinforcement learning tasks like MuJoCo locomotion. LA-MCTS consistently outperforms several state-of-the-art (SoTA) optimizers, including BO variants and evolutionary algorithms, particularly in scenarios with higher dimensionality. In comparisons with conventional methods such as CMA-ES and VOO, LA-MCTS exhibits superior performance by effectively narrowing the search region to focus on promising candidates.

The paper further validates LA-MCTS as an effective meta-algorithm by showing its performance when paired with various solvers. The results indicate improved sample efficiency and robustness, particularly highlighting its ability to prevent over-exploration in large search spaces.

Implications and Future Directions

The introduction of LA-MCTS contributes significantly to the landscape of high-dimensional black-box optimization. Its use of latent actions and adaptive partitioning offers a practical alternative to traditional space-partitioning methods, whose partitions are typically fixed a priori rather than adapted to the objective.

From a theoretical perspective, the approach exemplifies the intersection of machine learning and optimization, paving the way for further exploration into automated partitioning strategies that could enhance optimization frameworks. Practically, LA-MCTS has promising implications for various application domains, including automatic tuning in distributed systems and complex robotic environments.

Looking ahead, the authors suggest possible extensions into multi-objective optimization, which could broaden the utility of LA-MCTS in real-world scenarios. The exploration of alternative models and decision boundaries may also yield additional enhancements in efficiency and scalability.

In summary, LA-MCTS represents a substantial advance in black-box optimization, combining learned adaptive partitioning with powerful existing optimization techniques to handle the challenges of high-dimensional search spaces effectively.
