Cumulative Regret Analysis of the Piyavskii--Shubert Algorithm and Its Variants for Global Optimization (2108.10859v2)
Abstract: We study the problem of global optimization, analyzing the performance of the Piyavskii--Shubert algorithm and its variants. For a given time horizon $T$, instead of the extensively studied simple regret (the difference between the loss of the best estimate up to time $T$ and the global minimum), we study the cumulative regret up to time $T$. For $L$-Lipschitz continuous functions, we show that the cumulative regret is $O(L\log T)$. For $H$-Lipschitz smooth functions, we show that the cumulative regret is $O(H)$. We analytically extend our results to functions with Hölder continuous derivatives, a class that covers both the Lipschitz continuous and the Lipschitz smooth functions. We further show that a simpler variant of the Piyavskii--Shubert algorithm performs as well as the traditional variants for Lipschitz continuous or Lipschitz smooth functions. We then extend our results to broader classes of functions and show that our algorithm determines its queries efficiently and achieves nearly minimax-optimal (up to logarithmic factors) cumulative regret under general convex or even concave regularity conditions on the extrema of the objective, which encompass many of the preceding regularities. Finally, we investigate the performance of the Piyavskii--Shubert variants in scenarios with unknown regularity, noisy evaluations, and multivariate domains.
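To make the algorithm under analysis concrete, the following is a minimal sketch of the classical Piyavskii--Shubert method for minimizing an $L$-Lipschitz function on an interval, the setting of the $O(L\log T)$ bound above. It maintains the piecewise-linear lower envelope $\max_i \{f(x_i) - L|x - x_i|\}$ built from past queries and always queries the point where that envelope is lowest. The function and parameter names here are illustrative, not taken from the paper.

```python
import math

def piyavskii_shubert(f, a, b, lipschitz, budget):
    """Minimize an L-Lipschitz function f on [a, b] with a fixed query budget.

    Keeps sorted query points xs with values ys. Inside each interval
    [x_i, x_{i+1}], the lower envelope max(y_i - L|x - x_i|, y_{i+1} - L|x - x_{i+1}|)
    attains its minimum where the two downward cones intersect; the next
    query is the interval minimizer with the smallest envelope value.
    Returns all queried points and their function values.
    """
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(budget - 2):
        best_env, best_x, best_i = math.inf, None, None
        for i in range(len(xs) - 1):
            xl, xr = xs[i], xs[i + 1]
            yl, yr = ys[i], ys[i + 1]
            # Intersection of the cones y = yl - L(x - xl) and y = yr - L(xr - x).
            x_star = 0.5 * (xl + xr) + (yl - yr) / (2.0 * lipschitz)
            env = 0.5 * (yl + yr) - 0.5 * lipschitz * (xr - xl)
            if env < best_env:
                best_env, best_x, best_i = env, x_star, i
        xs.insert(best_i + 1, best_x)
        ys.insert(best_i + 1, f(best_x))
    return xs, ys
```

The cumulative regret studied in the paper is then the sum of $f(x_t) - \min f$ over the queried points $x_t$, rather than only the gap of the single best query. A simple usage: minimizing $f(x) = |x - 0.3|$ on $[0, 1]$ with a slight overestimate of the Lipschitz constant drives the best queried value toward the global minimum within a few dozen queries.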