Large Deviations for Empirical Measures of Self-Interacting Markov Chains (2304.01384v2)
Abstract: Let $\Delta^o$ be a finite set and, for each probability measure $m$ on $\Delta^o$, let $G(m)$ be a transition probability kernel on $\Delta^o$. Fix $x_0 \in \Delta^o$ and consider the chain $\{X_n, \; n \in \mathbb{N}_0\}$ of $\Delta^o$-valued random variables such that $X_0=x_0$, and given $X_0, \ldots, X_n$, the conditional distribution of $X_{n+1}$ is $G(L_{n+1})(X_n, \cdot)$, where $L_{n+1} = \frac{1}{n+1} \sum_{i=0}^{n} \delta_{X_i}$ is the empirical measure at instant $n$. Under conditions on $G$ we establish a large deviation principle for the empirical measure sequence $\{L_n, \; n \in \mathbb{N}\}$. As one application of this result we obtain large deviation asymptotics for the Aldous-Flannery-Palacios (1988) approximation scheme for quasistationary distributions of irreducible finite state Markov chains. The conditions on $G$ cover various other models of reinforced stochastic evolutions as well, including certain vertex reinforced and edge reinforced random walks and a variant of the PageRank algorithm. The particular case where $G(m)$ does not depend on $m$ corresponds to the classical results of Donsker and Varadhan (1975) on large deviations of empirical measures of Markov processes. However, unlike this classical setting, for the general self-interacting models considered here, the rate function takes a very different form; it is typically non-convex and is given through a dynamical variational formula with an infinite horizon discounted objective function.
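The chain described in the abstract can be simulated directly: at each step, form the empirical measure of the trajectory so far, evaluate the measure-dependent kernel $G$ at it, and sample the next state. The sketch below does this for a small hypothetical kernel `G_example` (a uniform base kernel reinforced toward frequently visited states, chosen here purely for illustration; it is not a model from the paper).

```python
import numpy as np

def simulate_chain(G, n_states, x0, n_steps, seed=None):
    """Simulate X_0, ..., X_{n_steps}: given X_0, ..., X_n, the next
    state X_{n+1} is drawn from G(L_{n+1})(X_n, .), where L_{n+1} is
    the empirical measure of X_0, ..., X_n."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_states)
    x = x0
    counts[x] += 1
    path = [x]
    for _ in range(n_steps):
        L = counts / counts.sum()      # empirical measure L_{n+1}
        kernel = G(L)                  # transition matrix G(L_{n+1})
        x = rng.choice(n_states, p=kernel[x])
        counts[x] += 1
        path.append(x)
    return np.array(path), counts / counts.sum()

# Hypothetical example kernel (an assumption for illustration): mix a
# uniform base kernel with reinforcement toward often-visited states.
def G_example(L, eps=0.3):
    base = np.full((3, 3), 1.0 / 3.0)
    reinforced = np.tile(L, (3, 1))    # each row pulled toward L
    return (1 - eps) * base + eps * reinforced

path, L_final = simulate_chain(G_example, n_states=3, x0=0,
                               n_steps=10_000, seed=0)
```

The case where `G` ignores its argument `L` reduces to an ordinary time-homogeneous Markov chain, matching the Donsker-Varadhan setting mentioned above.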