Gradient flow for mutual information and algorithmic implementation
Develop a gradient-flow formulation for mutual information itself on the space of probability measures and ascertain whether such a flow can be implemented algorithmically to minimize mutual information.
References
While we study the convergence of mutual information along the Langevin diffusion and ULA, many other interesting questions remain open. Finally, just as the Fokker-Planck equation can be viewed as the Wasserstein gradient flow for relative entropy, if we intend to minimize mutual information, then it would be interesting to study the gradient flow for mutual information itself, and whether we can implement it algorithmically.
                — Characterizing Dependence of Samples along the Langevin Dynamics and Algorithms via Contraction of $\Phi$-Mutual Information
                
                (2402.17067 - Liang et al., 26 Feb 2024) in Discussion (Section: Discussion)
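The quoted passage leans on the classical fact that the Fokker-Planck equation is the Wasserstein gradient flow of relative entropy, so the Langevin diffusion (and its discretization, the unadjusted Langevin algorithm, ULA) dissipates KL divergence to the target. A minimal sketch of this dissipation, using a toy Gaussian example of my own (not from the paper): for the target $N(0,1)$ with potential $V(x) = x^2/2$, ULA started from a Gaussian keeps the iterate law Gaussian, so the KL divergence to the target can be tracked in closed form and seen to decrease toward ULA's $O(h)$ bias floor.

```python
import math

def ula_gaussian_kl(m0, v0, h, n_steps):
    """Track KL(law(x_k) || N(0,1)) in closed form along ULA.

    ULA update: x_{k+1} = x_k - h * V'(x_k) + sqrt(2h) * xi_k,
    with V(x) = x^2 / 2, xi_k ~ N(0, 1).
    A Gaussian initialization N(m0, v0) stays Gaussian, with
        m_{k+1} = (1 - h) * m_k,
        v_{k+1} = (1 - h)^2 * v_k + 2h,
    and KL(N(m, v) || N(0, 1)) = (v + m^2 - 1 - log v) / 2.
    """
    kls = []
    m, v = m0, v0
    for _ in range(n_steps + 1):
        kls.append(0.5 * (v + m * m - 1.0 - math.log(v)))
        m = (1.0 - h) * m
        v = (1.0 - h) ** 2 * v + 2.0 * h
    return kls

kls = ula_gaussian_kl(m0=3.0, v0=4.0, h=0.1, n_steps=100)
# KL decreases monotonically toward ULA's small positive bias floor
assert all(a >= b for a, b in zip(kls, kls[1:]))
```

The open problem above asks for the analogue of this picture with mutual information in place of relative entropy: a gradient-flow structure for mutual information on the space of probability measures, together with an implementable algorithm that descends it.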