Gibbs conditioning principle for log-concave independent random variables (2512.24910v1)
Abstract: Let $\nu_1,\nu_2,\dots$ be a sequence of probabilities on the nonnegative integers, and let $X=(X_1,X_2,\dots)$ be a sequence of independent random variables $X_i$ with law $\nu_i$. For $\lambda>0$ denote $Z^\lambda_i := \sum_x \lambda^x \nu_i(x)$ and $\lambda_{\max} := \sup\{\lambda>0 : Z^\lambda_i<\infty \text{ for all } i\}$, and assume $\lambda_{\max}>1$. For $\lambda<\lambda_{\max}$, define the tilted probability $\nu^\lambda_i(x) := \lambda^x \nu_i(x)/Z^\lambda_i$, let $X^\lambda$ be a sequence of independent variables $X^\lambda_i$ with law $\nu^\lambda_i$, and denote $S^\lambda_n := X^\lambda_1+\dots+X^\lambda_n$, with $S_n = S^1_n$. Choose $\lambda^* \in (1,\lambda_{\max})$ and denote $R^*_n := E\bigl(S^{\lambda^*}_n\bigr)$. The Gibbs Conditioning Principle (GCP) holds if $P(X\in\cdot \mid S_n>R^*_n)$ converges weakly to the law of $X^{\lambda^*}$ as $n\to\infty$. We prove the GCP for log-concave $\nu_i$'s, meaning $\nu_i(x+1)\,\nu_i(x-1) \le (\nu_i(x))^2$, subject to a technical condition that prevents condensation. The canonical measures are the distributions of the first $n$ variables, conditioned on their sum being $k$. Efron's theorem states that for log-concave $\nu_i$'s, the canonical measures are stochastically ordered with respect to $k$. This, in turn, leads to the ordering of the conditioned tilted measures $P(X^\lambda\in\cdot \mid S^\lambda_n>R^*_n)$ in terms of $\lambda$. This ordering is a fundamental component of our proof.
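As an illustrative sketch (not taken from the paper), the exponential tilting $\nu^\lambda_i(x) = \lambda^x \nu_i(x)/Z^\lambda_i$ and the log-concavity condition $\nu_i(x+1)\,\nu_i(x-1) \le (\nu_i(x))^2$ can be checked numerically on a truncated Poisson law, which is log-concave; the function and variable names below are ad hoc choices for the demonstration. Tilting a Poisson($\mu$) pmf by $\lambda$ yields Poisson($\lambda\mu$), and tilting preserves log-concavity since the factor $\lambda^x$ is log-linear in $x$.

```python
import math

def tilt(nu, lam):
    # Exponential tilt of a pmf on {0, ..., len(nu)-1}:
    # nu_lam(x) = lam**x * nu(x) / Z_lam, with Z_lam the normalizing constant.
    w = [lam ** x * p for x, p in enumerate(nu)]
    Z = sum(w)  # plays the role of Z^lambda in the abstract
    return [v / Z for v in w]

def is_log_concave(nu):
    # Check nu(x+1) * nu(x-1) <= nu(x)**2 at every interior point
    # (with a small relative slack for floating-point roundoff).
    return all(nu[x + 1] * nu[x - 1] <= nu[x] ** 2 * (1 + 1e-12)
               for x in range(1, len(nu) - 1))

# Poisson(1) pmf truncated to {0, ..., 19} and renormalized.
raw = [math.exp(-1) / math.factorial(x) for x in range(20)]
Z = sum(raw)
nu = [p / Z for p in raw]

nu2 = tilt(nu, 2.0)  # approximately Poisson(2), up to truncation
mean2 = sum(x * p for x, p in enumerate(nu2))

print(is_log_concave(nu), is_log_concave(nu2))  # tilting preserves log-concavity
print(round(mean2, 4))                          # tilted mean is close to 2.0
```

The truncation at $x=19$ introduces an error far below the printed precision, since the Poisson(2) tail beyond 19 is negligible.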