Scalable Derivative-Free Optimization Algorithms with Low-Dimensional Subspace Techniques
Abstract: We re-introduce a derivative-free subspace optimization framework originating from Chapter 5 of the Ph.D. thesis [Z. Zhang, On Derivative-Free Optimization Methods, Ph.D. thesis, Chinese Academy of Sciences, Beijing, 2012] of the author under the supervision of Ya-xiang Yuan. At each iteration, the framework defines a (low-dimensional) subspace based on an approximate gradient, and then solves a subproblem in this subspace to generate a new iterate. We sketch the global convergence and worst-case complexity analysis of the framework, elaborate on its implementation, and present some numerical results on solving problems with dimensions as high as 10^4 using only inaccurate function values.
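To make the iteration structure described in the abstract concrete, here is a minimal Python sketch, not the paper's algorithm: it builds a low-dimensional subspace from a finite-difference gradient estimate plus the previous step (the previous-step direction is my own assumption, a common choice in subspace methods, not stated in the abstract) and solves the subproblem with an off-the-shelf derivative-free solver. All names (`approx_gradient`, `subspace_dfo`) and parameter choices are hypothetical, and the sketch omits the framework's handling of inexact function values and its convergence safeguards.

```python
import numpy as np
from scipy.optimize import minimize


def approx_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate using only function values."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g


def subspace_dfo(f, x0, max_iter=100, tol=1e-8):
    """Illustrative subspace iteration: at each step, minimize f over a
    low-dimensional affine subspace spanned by the approximate gradient
    and (when available) the previous step."""
    x = np.asarray(x0, dtype=float)
    prev_step = None
    for _ in range(max_iter):
        g = approx_gradient(f, x)
        if np.linalg.norm(g) < tol:
            break
        # Subspace basis: approximate gradient, plus the previous step.
        cols = [g]
        if prev_step is not None and np.linalg.norm(prev_step) > 0:
            cols.append(prev_step)
        B, _ = np.linalg.qr(np.column_stack(cols))  # orthonormal basis
        # Low-dimensional subproblem: minimize phi(a) = f(x + B a) over a,
        # solved derivative-free (here via Nelder-Mead for illustration).
        phi = lambda a: f(x + B @ a)
        res = minimize(phi, np.zeros(B.shape[1]), method="Nelder-Mead")
        step = B @ res.x
        prev_step = step
        x = x + step
    return x


if __name__ == "__main__":
    # Example: a 100-dimensional convex quadratic with spread eigenvalues.
    n = 100
    d = np.linspace(1.0, 10.0, n)
    f = lambda x: 0.5 * np.dot(d * x, x)
    x_star = subspace_dfo(f, np.ones(n))
    print(f"final objective: {f(x_star):.3e}")
```

The point of the structure is scalability: each subproblem lives in two dimensions regardless of n, so the cost of the inner solve does not grow with the ambient dimension; only the gradient estimate touches all n coordinates.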