Finite-time and Fixed-time Convergence in Continuous-time Optimization
Abstract: It is known that the gradient method can be viewed as a dynamical system in which various iterative schemes can be designed as part of a closed-loop system with desirable properties. This paper focuses on finite-time and fixed-time convergence in continuous-time optimization. Exploiting sliding mode control, a finite-time gradient method is first proposed, whose convergence time depends on the initial conditions. To make the convergence time robust to initial conditions, two fixed-time gradient methods are then designed. The first uses properties of the sine function, so that its convergence time is determined by the frequency of a sine function; the second uses properties of the Mittag-Leffler function, so that its convergence time is determined by the first positive zero of a Mittag-Leffler function. All results are extended to more general cases and illustrated by dedicated simulation examples.
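To give a flavor of the continuous-time dynamics discussed in the abstract, the following is a minimal sketch of one standard sliding-mode-style finite-time gradient flow, dx/dt = -∇f(x)/‖∇f(x)‖, integrated with forward Euler. It is not the paper's exact construction; the quadratic objective, the matrix A, the step size dt, and the stopping tolerance are illustrative assumptions.

```python
# Illustrative sketch only: a normalized (sliding-mode-style) gradient flow,
# dx/dt = -grad f(x) / ||grad f(x)||, which drives a strongly convex f to its
# minimizer in finite time, unlike the ordinary flow dx/dt = -grad f(x).
# The objective, step size, and tolerance are assumptions, not taken from the paper.
import numpy as np

A = np.diag([1.0, 4.0])  # assumed toy quadratic f(x) = 0.5 * x^T A x

def grad_f(x):
    return A @ x

def finite_time_flow(x0, dt=1e-3, tol=1e-2, max_steps=200_000):
    """Forward-Euler integration of the normalized gradient flow."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_steps):
        g = grad_f(x)
        norm_g = np.linalg.norm(g)
        if norm_g < tol:              # reached a neighborhood of the minimizer
            return x, k * dt
        x = x - dt * g / norm_g       # unit-norm (sliding-mode-style) descent step
    return x, max_steps * dt

x_end, t_reach = finite_time_flow([3.0, -2.0])
print("approximate minimizer:", x_end, "reached at simulated time ~", t_reach)
```

The key point the sketch illustrates is that the unit-norm vector field reaches the minimizer in finite time, but that time still depends on the initial condition; the fixed-time designs in the paper are aimed at removing that dependence.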