A kernel method for the learning of Wasserstein geometric flows (2511.06655v1)
Abstract: Wasserstein gradient and Hamiltonian flows have emerged as essential tools for modeling complex dynamics in the natural sciences, with applications ranging from partial differential equations (PDEs) and optimal transport to quantum mechanics and information geometry. Despite their significance, the inverse identification of potential functions and interaction kernels underlying these flows remains relatively unexplored. In this work, we tackle this challenge by addressing the inverse problem of simultaneously recovering the potential function and interaction kernel from discretized observations of the density flow. We formulate the problem as an optimization task that minimizes a loss function specifically designed to enforce the underlying variational structure of Wasserstein flows, ensuring consistency with the geometric properties of the density manifold. Our framework employs a kernel-based operator approach using the associated Reproducing Kernel Hilbert Space (RKHS), which provides a closed-form representation of the unknown components. Furthermore, a comprehensive error analysis is conducted, providing convergence rates under adaptive regularization parameters as the temporal and spatial discretization mesh sizes tend to zero. Finally, a stability analysis is presented to bridge the gap between discrete trajectory data and continuous-time flow dynamics for the Wasserstein Hamiltonian flow.
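The abstract describes a kernel-based operator approach in an RKHS that yields a closed-form representation of the unknown drift components. As a minimal illustrative sketch (not the paper's actual formulation), the snippet below recovers the drift b(x) = -V'(x) of a one-dimensional gradient flow from discretized particle trajectories via kernel ridge regression with a Gaussian kernel; the potential V(x) = x²/2, the kernel bandwidth, and all mesh parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(X, Y, sigma=0.5):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-|X_i - Y_j|^2 / (2 sigma^2))."""
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

# Simulate particles following dx/dt = -V'(x) = -x (gradient flow of
# V(x) = x^2 / 2), observed on a temporal mesh of size dt.
n, dt, steps = 100, 0.01, 10
x = rng.normal(0.0, 1.0, size=n)
traj = [x.copy()]
for _ in range(steps):
    x = x - dt * x  # forward Euler step for the gradient flow
    traj.append(x.copy())
traj = np.array(traj)

# Finite-difference velocities are the regression targets for the drift b(x).
X = traj[:-1].ravel()
y = ((traj[1:] - traj[:-1]) / dt).ravel()

# Closed-form RKHS estimator: alpha = (K + lam * m * I)^{-1} y,
# the minimizer of the regularized least-squares loss over the RKHS.
lam = 1e-6
m = len(X)
K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + lam * m * np.eye(m), y)

def b_hat(x_query):
    """Estimated drift at query points, evaluated via the kernel expansion."""
    return gauss_kernel(np.atleast_1d(np.asarray(x_query, dtype=float)), X) @ alpha

# The estimate should approximate the true drift b(x) = -x near the data.
grid = np.linspace(-1.0, 1.0, 9)
err = float(np.max(np.abs(b_hat(grid) - (-grid))))
print(err)
```

In the paper's setting the regularization parameter is chosen adaptively as the temporal and spatial mesh sizes shrink; here `lam` is simply fixed small for illustration.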