
A Finite-Difference Trust-Region Method for Convexly Constrained Smooth Optimization (2510.17366v1)

Published 20 Oct 2025 in math.OC

Abstract: We propose a derivative-free trust-region method based on finite-difference gradient approximations for smooth optimization problems with convex constraints. The proposed method does not require computing an approximate stationarity measure. For nonconvex problems, we establish a worst-case complexity bound of $\mathcal{O}\!\left(n\left(\tfrac{L}{\sigma}\epsilon\right)^{-2}\right)$ function evaluations for the method to reach an $\left(\tfrac{L}{\sigma}\epsilon\right)$-approximate stationary point, where $n$ is the number of variables, $L$ is the Lipschitz constant of the gradient, and $\sigma$ is a user-defined estimate of $L$. If the objective function is convex, the complexity to reduce the functional residual below $(L/\sigma)\epsilon$ is shown to be $\mathcal{O}\!\left(n\left(\tfrac{L}{\sigma}\epsilon\right)^{-1}\right)$ function evaluations, while for Polyak-Łojasiewicz functions on unconstrained domains, the bound further improves to $\mathcal{O}\!\left(n\log\left(\left(\tfrac{L}{\sigma}\epsilon\right)^{-1}\right)\right)$. Numerical experiments on benchmark problems and a model-fitting application demonstrate the method's efficiency relative to state-of-the-art derivative-free solvers for both unconstrained and bound-constrained problems.
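To make the ingredients concrete, below is a minimal Python sketch of a finite-difference trust-region loop for the unconstrained case. It is not the paper's algorithm: the Cauchy-type step, the acceptance threshold eta, the doubling/halving radius updates, and the default difference step h are all illustrative assumptions. What it does reflect from the abstract is the O(n) cost of each forward-difference gradient and the role of sigma as a user-supplied estimate of the gradient's Lipschitz constant L.

    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        # Forward-difference gradient: n extra function evaluations per
        # gradient, which is the source of the factor n in the bounds.
        fx = f(x)
        g = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g

    def fd_trust_region(f, x0, sigma=1.0, delta0=1.0, eta=0.1,
                        max_iter=200, tol=1e-8):
        # Hypothetical trust-region loop; sigma plays the role of the
        # user-defined Lipschitz estimate from the paper. The constraint
        # handling of the actual method is omitted here.
        x, delta = np.asarray(x0, dtype=float), delta0
        for _ in range(max_iter):
            g = fd_gradient(f, x)
            gnorm = np.linalg.norm(g)
            if gnorm < tol:
                break
            # Cauchy-type step: steepest descent clipped to the trust
            # region, with curvature modeled by the estimate sigma.
            t = min(delta, gnorm / sigma)
            s = -t * g / gnorm
            pred = -(g @ s) - 0.5 * sigma * (s @ s)  # predicted decrease
            ared = f(x) - f(x + s)                   # actual decrease
            if pred > 0 and ared / pred >= eta:      # accept, expand
                x, delta = x + s, 2.0 * delta
            else:                                    # reject, shrink
                delta *= 0.5
        return x

    # Usage on a simple smooth test function, f(x) = ||x - 1||^2.
    x_star = fd_trust_region(lambda x: np.sum((x - 1.0) ** 2), np.zeros(5))

Note that the accept/reject test compares the actual decrease against the model's predicted decrease, so no explicit stationarity measure is evaluated inside the loop, mirroring the abstract's claim that the method does not require computing one.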
