
A gradient flow on control space with rough initial condition (2407.11817v1)

Published 16 Jul 2024 in math.OC and math.PR

Abstract: We consider the (sub-Riemannian type) control problem of finding a path going from an initial point $x$ to a target point $y$, by only moving in certain admissible directions. We assume that the corresponding vector fields satisfy the bracket-generating (Hörmander) condition, so that the classical Chow-Rashevskii theorem guarantees the existence of such a path. One natural way to try to solve this problem is via a gradient flow on control space. However, since the corresponding dynamics may have saddle points, any convergence result must rely on suitable (e.g. random) initialisation. We consider the case when this initialisation is irregular, which is conveniently formulated via Lyons' rough path theory. We show that one advantage of this initialisation is that the saddle points are moved to infinity, while minima remain at a finite distance from the starting point. In the step $2$-nilpotent case, we further manage to prove that the gradient flow converges to a solution, if the initial condition is the path of a Brownian motion (or rougher). The proof is based on combining ideas from Malliavin calculus with Łojasiewicz inequalities. A possible motivation for our study comes from the training of deep Residual Neural Nets, in the regime when the number of trainable parameters per layer is smaller than the dimension of the data vector.
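To make the setup concrete, here is a minimal numerical sketch of a gradient descent (a discretised gradient flow) on control space, in the simplest step-$2$ nilpotent, bracket-generating example: the Heisenberg group. This is an illustration of the general idea only, not the paper's construction; the vector fields, the Brownian-increment-style random initialisation, the finite-difference gradient, and the backtracking step-size rule are all choices made for this sketch.

```python
import numpy as np

# Illustrative example (not the paper's exact setup): admissible directions
#   X1(x, y, z) = (1, 0, 0),   X2(x, y, z) = (0, 1, x),
# whose bracket [X1, X2] = (0, 0, 1) spans the missing direction, so the
# Hörmander condition holds and every target is reachable (Chow-Rashevskii).

rng = np.random.default_rng(0)
N, dt = 20, 1.0 / 20                  # time discretisation of [0, 1]
target = np.array([0.0, 0.0, 1.0])    # reachable only via the bracket direction

def endpoint(u):
    """Euler-integrate ds = (u1*X1(s) + u2*X2(s)) dt from the origin."""
    x = y = z = 0.0
    for u1, u2 in u:
        z += dt * u2 * x              # uses the pre-step x (explicit Euler)
        x += dt * u1
        y += dt * u2
    return np.array([x, y, z])

def loss(u):
    """Squared distance of the endpoint map to the target."""
    d = endpoint(u) - target
    return d @ d

def grad(u, eps=1e-6):
    """Central finite-difference gradient in control space (for the sketch)."""
    g = np.zeros_like(u)
    for i in range(u.shape[0]):
        for j in range(2):
            up, dn = u.copy(), u.copy()
            up[i, j] += eps
            dn[i, j] -= eps
            g[i, j] = (loss(up) - loss(dn)) / (2 * eps)
    return g

# "Rough" initial condition, mimicking Brownian increments: u ~ dB/dt,
# so the control has standard deviation 1/sqrt(dt) per step.
u = rng.standard_normal((N, 2)) / np.sqrt(dt)

lr, L0 = 0.5, loss(u)
L = L0
for _ in range(1500):
    g = grad(u)
    trial = u - lr * g
    Lt = loss(trial)
    if Lt < L:                        # accept and cautiously grow the step
        u, L, lr = trial, Lt, lr * 1.1
    else:                             # backtrack: halve the step size
        lr *= 0.5

print(f"initial loss {L0:.3f} -> final loss {L:.2e}")
```

Note that the zero control is a saddle of this loss (the $z$-residual is annihilated by the differential of the endpoint map at $u = 0$), so a generic random initialisation matters even in this toy example, echoing the role of rough initialisation in the paper.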

