Information Geometry of Exponentiated Gradient: Convergence beyond L-Smoothness (2504.05136v1)
Abstract: We study the exponentiated gradient (EG) method for minimizing smooth, possibly nonconvex functions over the positive orthant, a setting central to Poisson inverse problems. Interpreting EG as Riemannian gradient descent (RGD) with the $e$-Exp map from information geometry serving as a retraction, we prove global convergence under weak assumptions, without requiring $L$-smoothness, as well as finite termination of a Riemannian Armijo line search. Numerical experiments, including an accelerated variant, highlight EG's practical advantages, such as faster convergence than RGD based on the interior-point geometry.
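The abstract's central observation is that the multiplicative EG update, $x_{k+1} = x_k \odot \exp(-\eta_k \nabla f(x_k))$, coincides with RGD on the positive orthant when the $e$-Exp map is used as the retraction. Below is a minimal sketch of this update paired with a backtracking Armijo line search; the function name, the line-search constants (`eta0`, `shrink`, `c`), and the stopping tolerance are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

def exponentiated_gradient(f, grad_f, x0, eta0=1.0, shrink=0.5, c=1e-4,
                           max_iter=500, tol=1e-10):
    """Exponentiated gradient over the positive orthant, viewed as
    Riemannian gradient descent with the e-Exp map as retraction.
    Line-search constants here are illustrative, not the paper's values."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        fx = f(x)
        # Squared Riemannian gradient norm for the metric diag(1/x_i):
        # ||grad f||_x^2 = sum_i x_i * g_i^2.
        sq_norm = float(np.dot(x * g, g))
        if sq_norm <= tol:
            break
        # Backtracking Armijo search along the e-Exp retraction
        # R_x(-eta * grad f) = x * exp(-eta * g), which keeps x > 0.
        eta = eta0
        while f(x * np.exp(-eta * g)) > fx - c * eta * sq_norm and eta > 1e-16:
            eta *= shrink
        x = x * np.exp(-eta * g)  # multiplicative EG step
    return x
```

For instance, on the toy quadratic `f(x) = 0.5 * np.sum((x - 1)**2)` with gradient `x - 1` and start `np.full(5, 2.0)`, the iterates stay strictly positive without any projection, which is the practical appeal of EG on the orthant.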