Fixed-Time Stable Gradient Flows: Applications to Continuous-Time Optimization (1808.10474v5)
Abstract: This paper proposes novel gradient-flow schemes that converge to the optimal point of a convex optimization problem within a \textit{fixed} time from any given initial condition, for unconstrained optimization, constrained optimization, and min-max problems. First, the modified gradient flow is applied to unconstrained optimization problems under the assumption of gradient dominance. Then, a modified Newton's method is presented that exhibits fixed-time convergence under mild conditions on the objective function. Building on this method, a novel technique is developed for solving convex optimization problems with linear equality constraints that converges to the optimal point in fixed time. More specifically, constrained optimization problems are formulated as min-max problems via the Lagrangian dual, and a novel method for computing the optimal solution in fixed time is proposed. Finally, the general min-max problem is considered, and a modified saddle-point dynamics that reaches the optimal solution in fixed time is developed. Numerical illustrations comparing the performance of the proposed schemes against Newton's method, the rescaled-gradient method, and Nesterov's accelerated method are included to corroborate the efficacy and applicability of the modified gradient flows in constrained and unconstrained optimization problems.
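The abstract does not state the flow explicitly, so as a rough illustration the sketch below integrates a generic fixed-time-type gradient flow $\dot{x} = -c_1 \nabla f(x)/\|\nabla f(x)\|^{1-\mu_1} - c_2 \nabla f(x)/\|\nabla f(x)\|^{1-\mu_2}$ with $0 < \mu_1 < 1 < \mu_2$, a common template for fixed-time stability: under gradient dominance it yields $\dot{V} \le -aV^{p} - bV^{q}$ with $p < 1 < q$ for $V = f(x) - f^*$. The constants $c_1, c_2, \mu_1, \mu_2$, the quadratic objective, and the tolerances are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A minimal sketch, NOT the paper's exact scheme: a generic
# fixed-time-type gradient flow
#   dx/dt = -c1 * g / ||g||^(1 - mu1) - c2 * g / ||g||^(1 - mu2),
# with g = grad f(x) and 0 < mu1 < 1 < mu2. All constants and the
# quadratic objective are illustrative assumptions.

Q = np.diag([1.0, 10.0])          # f(x) = 0.5 * x^T Q x, minimizer x* = 0

def rhs(t, x, c1=1.0, c2=1.0, mu1=0.5, mu2=2.0, eps=1e-12):
    g = Q @ x
    n = np.linalg.norm(g) + eps   # regularize the non-Lipschitz point x*
    return -c1 * g / n**(1.0 - mu1) - c2 * g / n**(1.0 - mu2)

def near_optimum(t, x):
    # Terminal event: stop once x enters a small ball around x* = 0.
    return np.linalg.norm(x) - 1e-6
near_optimum.terminal = True

# Hitting times stay close even when ||x0|| varies by orders of
# magnitude, illustrating a convergence-time bound that is
# independent of the initial condition.
for x0 in ([1.0, 1.0], [1e3, -5e2]):
    sol = solve_ivp(rhs, (0.0, 10.0), np.array(x0),
                    events=near_optimum, rtol=1e-8, atol=1e-10)
    print(f"x0 = {x0}: reached the 1e-6 ball at t = {sol.t_events[0][0]:.3f}")
```

The terminal event deliberately stops the integration just outside the minimizer, where the right-hand side is non-Lipschitz; the fixed-time guarantee is a property of the continuous flow, and the behavior of any particular discretization is a separate question.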