Riemannian optimization with finite-difference gradient approximations

Published 13 Jan 2026 in math.OC | (2601.08751v1)

Abstract: Derivative-free Riemannian optimization (DFRO) aims to minimize an objective function using only function evaluations, under the constraint that the decision variables lie on a Riemannian manifold. The rapid growth of problem dimensions over the years calls for computationally cheap DFRO algorithms, that is, algorithms requiring as few function evaluations and retractions as possible. We propose a DFRO method based on finite-difference gradient approximations that relies on an adaptive selection of the finite-difference accuracy and stepsize, a strategy that is novel even in the Euclidean setting. When endowed with an intrinsic finite-difference scheme, which measures variations of the objective in tangent directions using retractions, the proposed method requires $O(d\varepsilon^{-2})$ function evaluations and retractions to find an $\varepsilon$-critical point, where $d$ is the manifold dimension. We then propose a variant of the method for the case where the search space is a Riemannian submanifold of an $n$-dimensional Euclidean space. This variant relies on an extrinsic finite-difference scheme that approximates the Riemannian gradient directly in the embedding space, assuming that the objective function can be evaluated outside the manifold. This approach leads to worst-case complexity bounds of $O(d\varepsilon^{-2})$ function evaluations and $O(\varepsilon^{-2})$ retractions. We also present numerical results showing that the proposed methods outperform existing derivative-free methods on a variety of problems in both Euclidean and Riemannian settings.
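
To make the two schemes concrete, here is a minimal NumPy sketch (ours, not the paper's algorithm) contrasting them on the unit sphere $S^{n-1}$, where the tangent space has dimension $d = n-1$ and $R_x(v) = (x+v)/\|x+v\|$ is a standard retraction. The intrinsic scheme differences the objective along retracted tangent directions and never leaves the manifold; the extrinsic scheme differences the objective in the embedding space and projects the result onto the tangent space. All function names are illustrative, and the fixed stepsize `h` stands in for the adaptive accuracy/stepsize selection proposed in the paper.

```python
import numpy as np

def retract_sphere(x, v):
    """Metric-projection retraction on the unit sphere: R_x(v) = (x+v)/||x+v||."""
    y = x + v
    return y / np.linalg.norm(y)

def tangent_basis_sphere(x):
    """Orthonormal basis of T_x S^{n-1} = {v : <v, x> = 0}: the QR factorization
    of [x | I] yields an orthonormal matrix whose first column is +-x; the
    remaining n-1 columns span the tangent space."""
    n = x.size
    q, _ = np.linalg.qr(np.column_stack([x, np.eye(n)]))
    return q[:, 1:]

def intrinsic_fd_grad(f, x, h=1e-6):
    """Intrinsic scheme: forward differences of f along retracted tangent
    directions, so f is only ever evaluated on the manifold
    (d function evaluations and d retractions, plus f(x))."""
    basis = tangent_basis_sphere(x)
    fx = f(x)
    coeffs = np.array([(f(retract_sphere(x, h * e)) - fx) / h for e in basis.T])
    return basis @ coeffs  # gradient estimate in ambient coordinates

def extrinsic_fd_grad(f, x, h=1e-6):
    """Extrinsic scheme: forward-difference the Euclidean gradient in the
    embedding space (f must be defined off the manifold), then project onto
    T_x; for an embedded submanifold with the induced metric, the Riemannian
    gradient is the tangent projection of the Euclidean gradient.
    No retractions are needed."""
    n = x.size
    fx = f(x)
    egrad = np.array([(f(x + h * np.eye(n)[i]) - fx) / h for i in range(n)])
    return egrad - (egrad @ x) * x  # projection onto the tangent space

if __name__ == "__main__":
    # Sanity check on the Rayleigh quotient f(x) = x' A x, whose Riemannian
    # gradient on the sphere is 2(Ax - (x' A x) x).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    A = (A + A.T) / 2
    f = lambda x: x @ A @ x
    x = rng.standard_normal(5)
    x /= np.linalg.norm(x)
    exact = 2 * (A @ x - (x @ A @ x) * x)
    print(np.linalg.norm(intrinsic_fd_grad(f, x) - exact))  # small, O(h)
    print(np.linalg.norm(extrinsic_fd_grad(f, x) - exact))  # small, O(h)
```

Per gradient estimate, the intrinsic sketch uses d retractions and d+1 function evaluations, while the extrinsic one uses n+1 evaluations and no retractions, mirroring the split between the two complexity bounds stated in the abstract.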
