Subspace Quasi-Newton Method with Gradient Approximation (2406.01965v1)
Abstract: In recent years, various subspace algorithms have been developed to handle large-scale optimization problems. Although existing subspace Newton methods require few iterations to converge in practice, their matrix operations and full gradient computations become bottlenecks at large scale. We propose a subspace quasi-Newton method that restricts the iterates to a deterministic subspace, combined with a gradient approximation based on random matrix theory. Our method requires no full gradients, let alone Hessian matrices. Yet it achieves, on average, worst-case iteration complexities of the same order as existing subspace methods in both the convex and nonconvex cases. Numerical experiments confirm that our algorithm outperforms existing methods in computation time.
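
For intuition only, below is a minimal NumPy sketch of a generic subspace quasi-Newton iteration of the kind the abstract describes: the gradient restricted to the subspace, P^T grad f(x), is estimated with d directional finite differences (so no full n-dimensional gradient is ever formed), and a BFGS inverse-Hessian approximation is maintained in the reduced d-dimensional space. The function names, the finite-difference estimator, the fixed orthonormal basis P, and the unit step length are illustrative assumptions, not the paper's construction (which uses a deterministic subspace and a random-matrix-theoretic gradient approximation).

    import numpy as np

    def subspace_grad(f, x, P, h=1e-6):
        """Estimate the reduced gradient P^T grad f(x) with d forward
        differences along the columns of P; no full gradient is formed."""
        fx = f(x)
        return np.array([(f(x + h * P[:, i]) - fx) / h
                         for i in range(P.shape[1])])

    def bfgs_inverse_update(B_inv, s, y):
        """Standard inverse BFGS update on the d x d reduced matrix."""
        sy = s @ y
        if sy <= 1e-12:           # skip when the curvature condition fails
            return B_inv
        rho = 1.0 / sy
        V = np.eye(len(s)) - rho * np.outer(s, y)
        return V @ B_inv @ V.T + rho * np.outer(s, s)

    # Toy convex quadratic in R^1000, optimized within a 20-dim subspace.
    n, d = 1000, 20
    A = np.diag(np.linspace(0.1, 1.5, n))
    f = lambda x: 0.5 * x @ (A @ x)

    rng = np.random.default_rng(0)
    P, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal basis
    x, B_inv = np.ones(n), np.eye(d)

    g = subspace_grad(f, x, P)
    for _ in range(50):
        p = -B_inv @ g            # quasi-Newton direction in the subspace
        x = x + P @ p             # lift back to R^n (unit step for brevity)
        g_new = subspace_grad(f, x, P)
        B_inv = bfgs_inverse_update(B_inv, p, g_new - g)
        g = g_new
    print(f(x))                   # objective decreases along the subspace

Note that with a fixed subspace, as in this toy loop, only the component of x lying in the range of P can be reduced; practical subspace methods vary the subspace across iterations.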