
Convergence of Conditional Entropy for Long Range Dependent Markov Chains (2110.14881v1)

Published 28 Oct 2021 in math.PR, cs.IT, and math.IT

Abstract: In this paper we consider the convergence of the conditional entropy to the entropy rate for Markov chains. Convergence of certain statistics of long range dependent processes, such as the sample mean, is slow. It has been shown by Carpio and Daley (2007) that the convergence of the $n$-step transition probabilities to the stationary distribution is slow, without quantifying the convergence rate. We prove that this slow convergence also applies to convergence to an information-theoretic measure, the entropy rate, by showing that the convergence rate is equivalent to the convergence rate of the $n$-step transition probabilities to the stationary distribution, which is equivalent to the Markov chain mixing time problem. We then quantify this convergence rate and show that it is $O(n^{2H-2})$, where $n$ is the number of steps of the Markov chain and $H$ is the Hurst parameter. Finally, we show that, due to this slow convergence, the mutual information between past and future is infinite if and only if the Markov chain is long range dependent. This is a discrete analogue of characterisations which have been shown for other long range dependent processes.
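As a rough illustration of the quantity being studied (not taken from the paper), the sketch below computes the conditional entropy H(X_{n+1} | X_n) of a small finite-state chain started deterministically from a fixed state and compares it with the entropy rate. The gap closes exactly as fast as the n-step distribution approaches the stationary distribution, which is the equivalence the abstract describes. The 3-state transition matrix is an arbitrary toy choice; the long range dependent chains treated in the paper have infinite state spaces and a Hurst parameter H governing the $O(n^{2H-2})$ rate.

```python
import numpy as np

# Illustrative sketch only: conditional entropy of a finite-state Markov chain
# started in a fixed state, compared against the entropy rate.

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalised to a probability vector."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def row_entropies(P):
    """Entropy (in bits) of each row of the transition matrix."""
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return -np.sum(P * logP, axis=1)

# Arbitrary toy transition matrix (not from the paper).
P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.25, 0.25, 0.50]])

pi = stationary_distribution(P)
h_rows = row_entropies(P)
entropy_rate = pi @ h_rows            # H(X_1 | X_0) under the stationary law

mu = np.zeros(3)
mu[0] = 1.0                           # chain started deterministically in state 0
for n in range(1, 21):
    mu = mu @ P                       # n-step distribution delta_0 P^n
    cond_entropy = mu @ h_rows        # H(X_{n+1} | X_n) given X_0 = 0
    print(n, cond_entropy - entropy_rate)   # gap shrinks as mu mixes toward pi
```

For a short-range dependent (geometrically mixing) chain like this toy example the gap decays exponentially; the paper's point is that for long range dependent chains it instead decays only polynomially, at rate $O(n^{2H-2})$.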
