Event-Triggered Time-Varying Bayesian Optimization (2208.10790v5)
Abstract: We consider the problem of sequentially optimizing a time-varying objective function using time-varying Bayesian optimization (TVBO). To cope with stale data arising from time variations, current approaches to TVBO require prior knowledge of a constant rate of change. However, in practice, the rate of change is usually unknown. We propose an event-triggered algorithm, ET-GP-UCB, that treats the optimization problem as static until it detects changes in the objective function and then resets the dataset. This allows the algorithm to adapt online to realized temporal changes without the need for exact prior knowledge. The event trigger is based on probabilistic uniform error bounds used in Gaussian process regression. We derive regret bounds for adaptive resets without exact prior knowledge of the temporal changes, and show in numerical experiments that ET-GP-UCB outperforms state-of-the-art algorithms on both synthetic and real-world data. The results demonstrate that ET-GP-UCB is readily applicable to various settings without extensive hyperparameter tuning.
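As a rough illustration of the event-triggered idea described in the abstract, below is a minimal sketch of a 1-D GP-UCB loop that resets its dataset when a new observation falls outside the Gaussian process confidence band. The function names, the trigger threshold (confidence band plus an assumed noise slack), the value of `beta`, and the scikit-learn GP backend are all illustrative assumptions, not the exact ET-GP-UCB rule or bounds derived in the paper.

```python
# Minimal sketch of an event-triggered GP-UCB loop with dataset resets.
# The trigger condition and all hyperparameters here are assumptions for
# illustration; they do not reproduce the paper's ET-GP-UCB algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def et_gp_ucb_sketch(objective, bounds, n_steps=100, beta=4.0,
                     noise_std=0.1, n_grid=200, seed=0):
    """Run a 1-D GP-UCB loop that discards its data when a trigger fires."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    grid = np.linspace(lo, hi, n_grid).reshape(-1, 1)
    X, y, gp = [], [], None

    for t in range(n_steps):
        if X:
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
            gp.fit(np.asarray(X), np.asarray(y))
            mu, sigma = gp.predict(grid, return_std=True)
            # Treat the problem as static: standard UCB acquisition.
            x_next = grid[np.argmax(mu + np.sqrt(beta) * sigma)]
        else:
            x_next = rng.uniform(lo, hi, size=(1,))

        y_next = float(objective(x_next, t))  # time-varying objective f(x, t)

        # Event trigger (assumed form): if the new observation lies outside
        # the GP's confidence band at x_next, conclude that the objective
        # has changed and discard the stale dataset.
        if gp is not None:
            mu_x, sigma_x = gp.predict(x_next.reshape(1, -1), return_std=True)
            if abs(y_next - mu_x[0]) > np.sqrt(beta) * sigma_x[0] + noise_std:
                X, y, gp = [], [], None  # reset: restart as a static problem

        X.append(np.atleast_1d(x_next))
        y.append(y_next)

    return np.asarray(X), np.asarray(y)
```

For example, `et_gp_ucb_sketch(lambda x, t: -float(np.sum((x - np.sin(0.05 * t)) ** 2)), bounds=(-2.0, 2.0))` tracks a slowly drifting 1-D optimum; the only point of the sketch is to show where the reset check sits relative to an otherwise standard GP-UCB update.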