Iteration Complexity and Finite-Time Efficiency of Adaptive Sampling Trust-Region Methods for Stochastic Derivative-Free Optimization (2305.10650v3)

Published 18 May 2023 in math.OC and math.PR

Abstract: Adaptive sampling with interpolation-based trust regions, or ASTRO-DF, is a successful algorithm for stochastic derivative-free optimization that is easy to understand and implement and guarantees almost-sure convergence to a first-order critical point. To reduce its dependence on the problem dimension, we present local models with diagonal Hessians constructed on interpolation points drawn from a coordinate basis. We also leverage the interpolation points in a direct-search manner whenever possible to boost ASTRO-DF's finite-time performance. We prove that the algorithm has a canonical iteration complexity of $\mathcal{O}(\epsilon^{-2})$ almost surely, which is the first guarantee of its kind without assumptions on the quality of function estimates, the quality of the models, or the independence between them. Numerical experiments reveal the computational advantage of ASTRO-DF with coordinate direct search, owing to sample savings and better steps in the early iterations of the search.
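The two ideas the abstract highlights can be sketched roughly as follows: a local quadratic model with a diagonal Hessian built from $2n+1$ interpolation points on the coordinate basis (central differences along each axis), plus a direct-search fallback that reuses those already-evaluated points. This is a minimal deterministic illustration, not the paper's implementation; adaptive sampling, trust-region management, and stochastic noise are all omitted, and the function names here are hypothetical.

```python
import numpy as np

def diagonal_hessian_model(f, x, delta):
    """Gradient and diagonal Hessian of a local quadratic model from
    2n+1 interpolation points on the coordinate basis: the center x
    plus x +/- delta * e_i for each coordinate direction e_i."""
    n = len(x)
    f0 = f(x)
    g = np.zeros(n)       # central-difference gradient estimate
    h = np.zeros(n)       # diagonal of the model Hessian
    pts, vals = [], []    # keep the interpolation set for reuse
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        fp, fm = f(x + e), f(x - e)
        g[i] = (fp - fm) / (2.0 * delta)
        h[i] = (fp - 2.0 * f0 + fm) / delta**2
        pts += [x + e, x - e]
        vals += [fp, fm]
    return f0, g, h, pts, vals

def direct_search_step(x, f0, pts, vals):
    """Reuse the interpolation points in a direct-search manner:
    if the best already-evaluated point improves on the center,
    move there at no extra evaluation cost."""
    i = int(np.argmin(vals))
    if vals[i] < f0:
        return pts[i], vals[i]
    return x, f0
```

For a smooth quadratic such as `f(x) = sum(x**2)`, the central differences recover the gradient `2x` and the constant Hessian diagonal `2` exactly (up to rounding), and the direct-search step moves toward whichever coordinate neighbor had the lowest observed value.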

