Large Scale Multi-Task Bayesian Optimization with Large Language Models (2503.08131v2)

Published 11 Mar 2025 in cs.LG

Abstract: In multi-task Bayesian optimization, the goal is to leverage experience from optimizing existing tasks to improve the efficiency of optimizing new ones. While approaches using multi-task Gaussian processes or deep kernel transfer exist, the performance improvement is marginal when scaling beyond a moderate number of tasks. We introduce a novel approach leveraging LLMs to learn from, and improve upon, previous optimization trajectories, scaling to approximately 1500 distinct tasks. Specifically, we propose a feedback loop in which an LLM is fine-tuned on the high quality solutions to specific tasks found by Bayesian optimization (BO). This LLM is then used to generate initialization points for future BO searches for new tasks. The trajectories of these new searches provide additional training data for fine-tuning the LLM, completing the loop. We evaluate our method on two distinct domains: database query optimization and antimicrobial peptide design. Results demonstrate that our approach creates a positive feedback loop, where the LLM's generated initializations gradually improve, leading to better optimization performance. As this feedback loop continues, we find that the LLM is eventually able to generate solutions to new tasks in just a few shots that are better than the solutions produced "from scratch" by Bayesian optimization, while simultaneously requiring significantly fewer oracle calls.

Summary

  • The paper introduces BOLT, a method that fine-tunes LLMs with BayesOpt trajectories to improve task initialization and convergence in multi-task optimization.
  • It integrates feedback loops linking initial BO tasks and LLM fine-tuning, reducing oracle queries while maintaining high performance.
  • Empirical results show superior efficiency in database query optimization and antimicrobial peptide design, confirming robust multi-task optimization.

Large Scale Multi-Task Bayesian Optimization with LLMs

Introduction

The paper "Large Scale Multi-Task Bayesian Optimization with LLMs" (2503.08131) presents a novel method for executing multi-task Bayesian optimization (BO) at scale by utilizing LLMs. Traditional approaches, which often include multi-task Gaussian processes or deep kernel transfer techniques, offer marginal gains when faced with a voluminous array of tasks. In contrast, this paper introduces BOLT, which leverages LLM fine-tuning on optimization trajectories from BayesOpt to construct improved initialization methods, achieving superior convergence on future tasks.

Multi-task Bayesian optimization is crucial in scenarios where related problems frequently arise, such as database query optimization and antimicrobial peptide design. The goal is to use insights from past solutions to streamline the process of optimizing new objectives.

Methodology

The BOLT framework revolves around two integrated components: 1) an LLM fine-tuning process based on collected high-quality solutions, and 2) the use of the fine-tuned model to kickstart new optimization tasks. After optimizing initial tasks with BO, the LLM is fine-tuned on the top solutions to improve future initializations. This creates a feedback loop in which better initializations lead to better optimization trajectories, which in turn further refine the LLM.
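
A minimal sketch of this feedback loop is shown below, assuming generic callables for the three components; `generate_inits`, `run_bo`, and `fine_tune` are hypothetical stand-ins for the paper's actual components, not the authors' implementation.

```python
# Sketch of a BOLT-style feedback loop. The callables passed in
# (generate_inits, run_bo, fine_tune) are hypothetical placeholders.
def bolt_feedback_loop(task_batches, llm, generate_inits, run_bo, fine_tune, top_k=10):
    """Alternate between BO on batches of tasks and fine-tuning the LLM."""
    training_data = []
    for batch in task_batches:
        for task in batch:
            # Ask the (possibly already fine-tuned) LLM for initialization
            # points, conditioned on a textual description of the task.
            init_points = generate_inits(llm, task.description)
            # Run an ordinary BO search seeded with those points.
            trajectory = run_bo(task, init_points)
            # Keep only the highest-quality solutions for fine-tuning.
            best = sorted(trajectory, key=lambda s: s.objective, reverse=True)[:top_k]
            training_data.extend((task.description, s.solution) for s in best)
        # Fine-tune the LLM on (task context -> high-quality solution) pairs,
        # closing the loop before the next batch of tasks.
        llm = fine_tune(llm, training_data)
    return llm
```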

These optimized trajectories and their corresponding task contexts serve to train the LLM as an initialization policy. This iterative loop allows BOLT to scale efficiently, requiring substantially fewer oracle queries than conventional methods. The method is agnostic to the specifics of the underlying BO procedure, allowing it to be combined with a variety of recent BO improvements.
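
Because of this agnosticism, the LLM-generated points can simply replace the usual random initial design in any BO loop. The sketch below illustrates this under generic assumptions; the `surrogate` and `acquisition` objects are placeholders rather than a specific BO library's API.

```python
# Illustrative sketch: LLM-proposed points replace the random initial design
# in an otherwise unchanged BO loop. The surrogate and acquisition objects
# are generic placeholders, since the approach is agnostic to the BO backend.
def seeded_bo(objective, init_points, budget, surrogate, acquisition):
    """Run BO starting from LLM-generated initializations rather than random ones."""
    X = list(init_points)                         # LLM proposals stand in for a random design
    y = [objective(x) for x in X]                 # oracle calls on the seed points
    while len(X) < budget:
        surrogate.fit(X, y)                       # refit the surrogate on all observations
        x_next = acquisition.maximize(surrogate)  # pick the next query point
        X.append(x_next)
        y.append(objective(x_next))
    best = max(range(len(y)), key=lambda i: y[i])
    return X[best], y[best]
```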

Results

BOLT was experimentally validated across two domains: database query optimization and antimicrobial peptide design. Both domains contain numerous related tasks that benefit from BOLT's approach. The results show a pronounced improvement in BO efficiency and effectiveness with BOLT-generated initializations, significantly surpassing existing methods in solution quality and speed (Figure 1).

Figure 1: Bayesian optimization performance on (Left) query plan optimization and (Right) antimicrobial peptide design. In both settings, BOLT outperforms or matches baselines with just initialization data before optimization begins.


Figure 2: Evaluating BOLT in the few shot setting and comparing to full optimization runs in both problem settings (Left: query plan optimization; Right: peptide design). In each plot, we show objective values accumulated across all validation tasks for various methods.

Discussion and Limitations

This work makes a significant contribution to multi-task BO by rethinking the initialization process through LLMs. BOLT's ability to improve optimization efficiency as more tasks are introduced, without performance degradation, underscores its robustness. A core assumption is that each task comes with a textual description the LLM can condition on, which may limit applicability in some areas such as hyperparameter optimization.

Although LLM fine-tuning is relatively efficient, its computational and monetary cost remains a practical consideration. Nonetheless, the findings suggest promise for extension into other domains where rigorous optimization is required.

Conclusion

The paper introduces BOLT as a viable alternative to traditional multi-task BO protocols by leveraging LLMs for improved initialization in a scalable manner. Empirical results show that the method not only mitigates the saturation observed in existing approaches as the number of tasks grows, but also improves initialization quality to the point where a few LLM-generated candidates often outperform full BO runs. This points toward applying LLMs to large-scale, complex optimization problems with far fewer oracle queries, and suggests avenues for future work on LLM-driven optimization frameworks.
