Controlled Optimization with a Prescribed Finite-Time Convergence Using a Time Varying Feedback Gradient Flow (2503.13910v1)

Published 18 Mar 2025 in math.OC

Abstract: From the perspective of control theory, gradient descent optimization methods can be regarded as dynamic systems to which various control techniques can be applied to enhance the performance of the optimization method. In this paper, we propose a prescribed finite-time convergent gradient flow that uses a time-varying-gain nonlinear feedback to drive the states smoothly toward the minimum. This idea differs from traditional finite-time convergence algorithms, which rely on fractional-power or signed-gradient nonlinear feedback and are proved to achieve finite/fixed-time convergence for functions that are strongly convex or satisfy the Polyak-{\L}ojasiewicz (P{\L}) inequality; by its nature, the proposed approach is shown to achieve this property both for strongly convex functions and for those satisfying the P{\L} inequality. Our method is proved to converge in a prescribed finite time via Lyapunov theory. Numerical experiments are presented to illustrate our results.
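
The abstract does not state the paper's exact gain law, but a common way to obtain prescribed-time convergence is the flow dx/dt = -k(t) grad f(x) with a gain k(t) = c/(T - t) that blows up as t approaches the prescribed horizon T. The following Python sketch simulates that flow with forward Euler; the function name, the gain law, and the parameters c, T, and eps are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def prescribed_time_gradient_flow(grad, x0, T=1.0, c=1.0, n_steps=5000, eps=1e-3):
        """Forward-Euler simulation of dx/dt = -k(t) * grad(x) with the
        time-varying gain k(t) = c / (T - t), which grows unboundedly as
        t -> T and forces convergence by the prescribed time T.
        Integration stops at t = T - eps to avoid the gain singularity.
        (Illustrative sketch; the paper's actual gain law may differ.)"""
        t_end = T - eps
        dt = t_end / n_steps
        x = np.asarray(x0, dtype=float)
        t = 0.0
        for _ in range(n_steps):
            k = c / (T - t)           # time-varying feedback gain
            x = x - dt * k * grad(x)  # Euler step of the gradient flow
            t += dt
        return x

    # Example: strongly convex quadratic f(x) = 0.5 * ||x||^2, minimizer at 0.
    grad = lambda x: x
    x_final = prescribed_time_gradient_flow(grad, x0=[5.0, -3.0], T=1.0)
    print(x_final)  # close to [0, 0] as t approaches the horizon T

For a PL function f, the same mechanism gives f(x(t)) - f* <= (f(x0) - f*) exp(-2 mu * integral of k), and since the integral of c/(T - t) diverges as t -> T, the suboptimality is driven to zero by the prescribed time regardless of the initial condition.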
