Gradient descent in a generalised Bregman distance framework (1612.02506v2)
Abstract: We discuss a special form of gradient descent that in the literature has become known as the so-called linearised Bregman iteration. The idea is to replace the classical (squared) two-norm metric in the gradient descent setting with a generalised Bregman distance, based on a more general proper, convex and lower semi-continuous functional. Gradient descent as well as the entropic mirror descent by Nemirovsky and Yudin are special cases, as is a specific form of non-linear Landweber iteration introduced by Bachmayr and Burger. We are going to analyse the linearised Bregman iteration in a setting where the functional we want to minimise is neither necessarily Lipschitz-continuous (in the classical sense) nor necessarily convex, and establish a global convergence result under the additional assumption that the functional we wish to minimise satisfies the so-called Kurdyka-Łojasiewicz property.
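For concreteness, the generalised Bregman distance induced by a proper, convex and lower semi-continuous functional J is D_J^p(x, y) = J(x) - J(y) - <p, x - y> for a subgradient p in ∂J(y), and the linearised Bregman iteration chooses x^{k+1} as the minimiser of τ<∇E(x^k), x> + D_J^{p^k}(x, x^k). The sketch below is a minimal, hedged illustration, not the paper's method: it takes the smooth least-squares objective E(x) = ½‖Ax − b‖² and the illustrative elastic-net-type choice J(x) = μ‖x‖₁ + ½‖x‖₂², for which the update reduces to a subgradient accumulation step followed by soft shrinkage. The function name and the parameters mu, tau and iters are assumptions for this example; the paper itself treats a more general, possibly non-convex and non-Lipschitz setting.

```python
import numpy as np

def linearised_bregman(A, b, mu=1.0, tau=None, iters=500):
    """Minimise E(x) = 0.5*||Ax - b||^2 by linearised Bregman iteration
    with the illustrative choice J(x) = mu*||x||_1 + 0.5*||x||_2^2.

    Updating p^{k+1} = p^k - tau * grad E(x^k) and then solving
    p^{k+1} in ∂J(x^{k+1}) (here: soft shrinkage) realises
    x^{k+1} = argmin_x tau*<grad E(x^k), x> + D_J^{p^k}(x, x^k).
    """
    m, n = A.shape
    if tau is None:
        # Step size below 1/L, where L = ||A||_2^2 is the Lipschitz
        # constant of the gradient of E.
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    p = np.zeros(n)  # p^k, a subgradient of J at x^k (p^0 = 0 in ∂J(0))
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of E at x^k
        p = p - tau * grad                # Bregman (subgradient) update
        x = np.sign(p) * np.maximum(np.abs(p) - mu, 0.0)  # soft shrinkage
    return x

# Usage example on a small synthetic sparse-recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = linearised_bregman(A, b, mu=0.1, iters=2000)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Note the design difference from a proximal gradient step: the iteration accumulates the negative gradients in the dual variable p and only then maps back to x through the resolvent of ∂J, which is precisely what the Bregman-distance proximity term produces in place of the squared two-norm.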