Methods of Nonconvex Optimization (2406.10406v1)

Published 14 Jun 2024 in math.OC

Abstract: This book is devoted to finite-dimensional problems of nonconvex, nonsmooth optimization and to numerical methods for their solution. Nonconvexity is studied through two main models of nonconvex dependence: the so-called generalized differentiable functions and locally Lipschitz functions. Nonsmooth functions arise naturally in applications, and they also appear within the theory of extremal problems itself through the operations of taking maxima and minima, decomposition techniques, exact nonsmooth penalties, and duality. The models considered are quite general, cover the majority of practically important optimization problems, and clearly exhibit all the difficulties of nonconvex optimization. Generalized differentiable functions are studied by introducing a generalization of the gradient for them, constructing a calculus, and analyzing the properties of nonconvex problems in terms of these generalized gradients. On the numerical side, the theory and subgradient-descent algorithms of convex optimization can be extended to problems with generalized differentiable functions. Methods for Lipschitz problems are characterized by approximating the original functions with smoothed ones and applying iterative minimization procedures to the approximations. Under this approach, the gradients of the smoothed functions can in turn be approximated by stochastic finite differences, yielding methods that require no gradient computations at all. A similar approach can be justified for generalized differentiable and Lipschitz stochastic programming; in these cases, various generalizations of classical stochastic approximation and of the stochastic quasi-gradient method are obtained for solving constrained nonconvex nonsmooth stochastic programming problems.
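The smoothing approach sketched in the abstract can be illustrated with a minimal Python/NumPy example (not taken from the book): a nonsmooth objective is replaced by its Gaussian-smoothed average, whose gradient is estimated by stochastic central finite differences, so the method never evaluates subgradients. The test function, step-size schedule, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def smoothed_grad(f, x, h=1e-2, n_samples=20, rng=None):
    # Monte Carlo estimate of the gradient of the smoothed function
    # f_h(x) = E[f(x + h*u)], u ~ N(0, I), via central finite differences.
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        g += (f(x + h * u) - f(x - h * u)) / (2 * h) * u
    return g / n_samples

def minimize_smoothing(f, x0, steps=500, h=1e-2, lr=0.5, seed=0):
    # Gradient-free descent on a nonsmooth (locally Lipschitz) f:
    # follow stochastic finite-difference estimates of the smoothed
    # gradient, with a diminishing step-size schedule.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        g = smoothed_grad(f, x, h=h, n_samples=20, rng=rng)
        x = x - (lr / np.sqrt(k + 1)) * g
    return x

# Illustrative nonsmooth objective: f(x) = |x1| + |x2 - 1|,
# minimized at (0, 1).
f = lambda x: abs(x[0]) + abs(x[1] - 1.0)
x_star = minimize_smoothing(f, np.array([3.0, -2.0]))
```

The diminishing step size plays the same role as in classical subgradient and stochastic approximation schemes: large early steps traverse the kinks, while shrinking later steps damp the oscillation around the nonsmooth minimum.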

Citations (5)

