Trajectorial dissipation and gradient flow for the relative entropy in Markov chains

Published 28 May 2020 in math.PR | arXiv:2005.14177v3

Abstract: We study the temporal dissipation of variance and relative entropy for ergodic Markov chains in continuous time, and compute explicitly the corresponding dissipation rates. These are identified, as is well known, in the case of the variance in terms of an appropriate Hilbertian norm; and in the case of the relative entropy, in terms of a Dirichlet form which morphs into a version of the familiar Fisher information under conditions of detailed balance. Here we obtain trajectorial versions of these results, valid along almost every path of the random motion and most transparent in the backwards direction of time. Martingale arguments and time reversal play crucial roles, as in the recent work of Karatzas, Schachermayer and Tschiderer for conservative diffusions. Extensions are developed to general "convex divergences" and to countable state-spaces. The steepest descent and gradient flow properties for the variance, the relative entropy, and appropriate generalizations, are studied along with their respective geometries under conditions of detailed balance, leading to a very direct proof for the HWI inequality of Otto and Villani in the present context.
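
As a rough illustration of the kind of dissipation identity the abstract refers to (a standard computation sketched here, with the notation $Q$, $\pi$, $p_t$, $\rho_t$ chosen for this note rather than taken from the paper): for an ergodic chain on a countable state space with generator matrix $Q = (q(x,y))$, stationary distribution $\pi$, and time-marginals $p_t$ solving the forward equation $\frac{d}{dt} p_t(y) = \sum_x p_t(x)\, q(x,y)$, the relative entropy $H(p_t \mid \pi) = \sum_x p_t(x) \log \frac{p_t(x)}{\pi(x)}$ dissipates at the rate

\[
\frac{d}{dt}\, H(p_t \mid \pi)
\;=\; \sum_{x \neq y} p_t(x)\, q(x,y)
\left[ \log \frac{p_t(y)}{\pi(y)} - \log \frac{p_t(x)}{\pi(x)} \right].
\]

Under detailed balance $\pi(x)\, q(x,y) = \pi(y)\, q(y,x)$ and with the likelihood ratio $\rho_t = p_t / \pi$, symmetrizing the sum turns this into the Dirichlet-form (discrete Fisher-information-type) expression

\[
\frac{d}{dt}\, H(p_t \mid \pi)
\;=\; -\,\frac{1}{2} \sum_{x \neq y} \pi(x)\, q(x,y)\,
\bigl( \rho_t(y) - \rho_t(x) \bigr)
\bigl( \log \rho_t(y) - \log \rho_t(x) \bigr) \;\le\; 0,
\]

which is nonpositive since the off-diagonal rates $q(x,y)$ are nonnegative and $(a-b)(\log a - \log b) \ge 0$. The paper's contribution is a trajectorial (pathwise, time-reversed martingale) refinement of identities of this type, together with their gradient-flow interpretation.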
