
Limit theorems for stationary Markov processes with L2-spectral gap (1201.4579v1)

Published 22 Jan 2012 in math.PR, math.ST, and stat.TH

Abstract: Let $(X_t, Y_t)_{t\in\mathbb{T}}$ be a discrete or continuous-time Markov process with state space $\mathbb{X} \times \mathbb{R}^d$, where $\mathbb{X}$ is an arbitrary measurable set. Its transition semigroup is assumed to be additive with respect to the second component, i.e. $(X_t, Y_t)_{t\in\mathbb{T}}$ is assumed to be a Markov additive process. In particular, this implies that the first component $(X_t)_{t\in\mathbb{T}}$ is also a Markov process. Markov random walks and additive functionals of a Markov process are special instances of Markov additive processes. In this paper, the process $(Y_t)_{t\in\mathbb{T}}$ is shown to satisfy the following classical limit theorems: (a) the central limit theorem, (b) the local limit theorem, (c) the one-dimensional Berry-Esseen theorem, (d) the one-dimensional first-order Edgeworth expansion, provided that $\sup_{t\in(0,1]\cap\mathbb{T}} \mathbb{E}_{\pi,0}[|Y_t|^{\alpha}] < \infty$ with the expected order $\alpha$ with respect to the independent case (up to some $\varepsilon > 0$ for (c) and (d)). For the statements (b) and (d), a Markov nonlattice condition is also assumed, as in the independent case. All the results are derived under the assumption that the Markov process $(X_t)_{t\in\mathbb{T}}$ has an invariant probability distribution $\pi$, is stationary, and has the $L^2(\pi)$-spectral gap property (that is, $(X_t)_{t\in\mathbb{N}}$ is $\rho$-mixing in the discrete-time case). The case where $(X_t)_{t\in\mathbb{T}}$ is non-stationary is briefly discussed. As an application, we derive a Berry-Esseen bound for the M-estimators associated with $\rho$-mixing Markov chains.
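As a hedged numerical illustration (not taken from the paper): the simplest discrete-time instance of the setting above is an additive functional $Y_n = \sum_{k=1}^n f(X_k)$ of a finite-state Markov chain with a spectral gap. The sketch below simulates a two-state chain, whose second eigenvalue has modulus $0.7 < 1$ so the $L^2(\pi)$-spectral gap (equivalently, $\rho$-mixing) holds, and checks the ergodic behavior that underlies the CLT for $(Y_n)$. The transition matrix, functional $f$, and sample size are illustrative choices, not values from the paper.

```python
import numpy as np

# Hypothetical example: a two-state Markov chain (X_n) on {0, 1} and the
# additive functional Y_n = f(X_1) + ... + f(X_n), a discrete-time Markov
# additive process in the sense of the abstract.

rng = np.random.default_rng(0)

# Transition matrix P; its second eigenvalue is 0.9 + 0.8 - 1 = 0.7, so the
# chain is rho-mixing, i.e. has an L^2(pi)-spectral gap.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Invariant distribution: solving pi P = pi gives pi = (2/3, 1/3).
pi = np.array([2.0 / 3.0, 1.0 / 3.0])

# f is chosen centered under pi: (2/3)*1 + (1/3)*(-2) = 0.
def f(x):
    return np.where(x == 0, 1.0, -2.0)

# Simulate the chain started from state 0.
n = 100_000
u = rng.random(n)
x = np.empty(n, dtype=np.int64)
x[0] = 0
for k in range(1, n):
    # Jump to state 1 with probability P[current, 1], else to state 0.
    x[k] = 1 if u[k] < P[x[k - 1], 1] else 0

y = f(x).cumsum()          # the additive component Y_n
freq0 = np.mean(x == 0)    # empirical occupation of state 0, near pi[0]
drift = y[-1] / n          # Y_n / n, near E_pi[f] = 0 (ergodic theorem)

print(freq0, drift)
```

In this regime the paper's moment condition is trivially satisfied (bounded $f$), and $Y_n/\sqrt{n}$ is asymptotically normal; the script only verifies the law-of-large-numbers behavior that the CLT refines.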
