Finite-time and Fixed-time Convergence in Continuous-time Optimization (2109.15064v1)

Published 30 Sep 2021 in math.OC, cs.SY, and eess.SY

Abstract: It is known that the gradient method can be viewed as a dynamical system in which various iterative schemes can be designed as part of a closed-loop system with desirable properties. This paper focuses on finite-time and fixed-time convergence in continuous-time optimization. Exploiting ideas from sliding mode control, a finite-time gradient method is proposed whose convergence time depends on the initial conditions. To make the convergence time robust to initial conditions, two different fixed-time gradient methods are then provided. The first uses a property of the sine function, so its convergence time depends on the frequency of that sine function; the second uses a property of the Mittag-Leffler function, so its convergence time is determined by the first positive zero of a Mittag-Leffler function. All results are extended to more general cases and demonstrated by dedicated simulation examples.
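The abstract does not reproduce the paper's dynamics, but a minimal sketch of a representative finite-time gradient flow may help fix ideas. The sketch below assumes the normalized, sliding-mode-style dynamics x' = -c * grad f(x) / ||grad f(x)||, integrated by forward Euler on a strongly convex quadratic; the function name finite_time_gradient_flow, the constants c, dt, tol, and the test problem are illustrative assumptions, not taken from the paper.

    # Sketch only (not the paper's exact construction): normalized gradient
    # flow  x' = -c * grad_f(x) / ||grad_f(x)||.  For strongly convex f its
    # trajectories reach the minimizer in a finite time that depends on the
    # initial condition, which is the behavior the abstract attributes to
    # the sliding-mode-based finite-time method.
    import numpy as np

    def finite_time_gradient_flow(grad_f, x0, c=1.0, dt=1e-3, t_max=10.0, tol=1e-2):
        """Forward-Euler integration of the normalized gradient flow."""
        x = np.asarray(x0, dtype=float)
        t = 0.0
        while t < t_max:
            g = grad_f(x)
            gnorm = np.linalg.norm(g)
            if gnorm < tol:              # reached the minimizer, up to tolerance
                break
            x = x - dt * c * g / gnorm   # unit-norm descent direction
            t += dt
        return x, t

    # Example: f(x) = 0.5 * x^T Q x with Q positive definite, minimizer x* = 0.
    Q = np.array([[2.0, 0.0], [0.0, 5.0]])
    grad_f = lambda x: Q @ x

    x_star, t_hit = finite_time_gradient_flow(grad_f, x0=[3.0, -4.0])
    print(f"reached x = {x_star} at t = {t_hit:.3f}")

The stopping tolerance is kept on the order of the Euler step length c*dt times the gradient's Lipschitz constant, since the discretized flow chatters around the minimizer much like a sliding-mode controller. The paper's fixed-time variants instead reshape the right-hand side (via a sine or a Mittag-Leffler function) so that the convergence-time bound no longer depends on the initial condition; those constructions are not sketched here.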

Citations (4)
