A Finite-Difference Trust-Region Method for Convexly Constrained Smooth Optimization (2510.17366v1)
Abstract: We propose a derivative-free trust-region method based on finite-difference gradient approximations for smooth optimization problems with convex constraints. The proposed method does not require computing an approximate stationarity measure. For nonconvex problems, we establish a worst-case complexity bound of $\mathcal{O}\!\left(n\left(\tfrac{L}{\sigma}\epsilon\right)^{-2}\right)$ function evaluations for the method to reach an $\left(\tfrac{L}{\sigma}\epsilon\right)$-approximate stationary point, where $n$ is the number of variables, $L$ is the Lipschitz constant of the gradient, and $\sigma$ is a user-defined estimate of $L$. If the objective function is convex, the complexity to reduce the functional residual below $(L/\sigma)\epsilon$ is shown to be $\mathcal{O}\!\left(n\left(\tfrac{L}{\sigma}\epsilon\right)^{-1}\right)$ function evaluations, while for Polyak-Łojasiewicz functions on unconstrained domains the bound further improves to $\mathcal{O}\!\left(n\log\left(\left(\tfrac{L}{\sigma}\epsilon\right)^{-1}\right)\right)$. Numerical experiments on benchmark problems and a model-fitting application demonstrate the method's efficiency relative to state-of-the-art derivative-free solvers for both unconstrained and bound-constrained problems.
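To make the two main ingredients of the abstract concrete, the sketch below shows a forward-difference gradient estimate (which costs $n+1$ function evaluations per iteration and is the source of the factor $n$ in the complexity bounds) inside a basic projected trust-region loop over a box-shaped convex feasible set. This is a minimal illustrative sketch, not the paper's algorithm: the step rule, the update constants, and the helper names (`fd_gradient`, `fd_trust_region`, `project_box`) are assumptions chosen for readability.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate: n extra evaluations per call.
    Generic illustration of the finite-difference ingredient; the step size
    rule here is an assumption, not the paper's scheme."""
    n = x.size
    fx = f(x)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g, fx

def project_box(x, lb, ub):
    """Euclidean projection onto box constraints, a simple convex set."""
    return np.clip(x, lb, ub)

def fd_trust_region(f, x0, lb, ub, sigma=1.0, delta=1.0, eta=0.1, max_evals=2000):
    """Minimal derivative-free trust-region loop on a box (illustrative only).
    sigma plays the role of the user-defined curvature estimate of L;
    steps are accepted when the actual/predicted reduction ratio exceeds eta."""
    x = project_box(np.asarray(x0, dtype=float), lb, ub)
    evals = 0
    while evals < max_evals:
        g, fx = fd_gradient(f, x)
        evals += x.size + 1
        # Candidate step: projected gradient step, truncated to the trust region.
        step = project_box(x - g / max(sigma, 1e-12), lb, ub) - x
        norm = np.linalg.norm(step)
        if norm > delta:
            step *= delta / norm
        # Predicted decrease of the quadratic model fx + g.s + (sigma/2)||s||^2.
        pred = -(g @ step) - 0.5 * sigma * (step @ step)
        if pred <= 0:
            break
        f_trial = f(x + step)
        evals += 1
        rho = (fx - f_trial) / pred
        if rho >= eta:                    # successful step: accept and expand
            x, delta = x + step, 2.0 * delta
        else:                             # unsuccessful step: shrink trust region
            delta *= 0.5
    return x, f(x), evals
```

As a usage example, `fd_trust_region(lambda z: np.sum((z - 0.3)**2), np.zeros(5), -np.ones(5), np.ones(5))` minimizes a shifted quadratic over the unit box using only function values, mirroring the bound-constrained setting of the paper's experiments.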