
Central Limit Theorem and Near classical Berry-Esseen rate for self normalized sums in high dimensions (2012.03758v2)

Published 7 Dec 2020 in math.PR, math.ST, and stat.TH

Abstract: In this article, we are interested in the high dimensional normal approximation of $T_n =\Big(\sum_{i=1}^{n}X_{i1}\Big/\Big(\sqrt{\sum_{i=1}^{n}X_{i1}^{2}}\Big),\dots,\sum_{i=1}^{n}X_{ip}\Big/\Big(\sqrt{\sum_{i=1}^{n}X_{ip}^{2}}\Big)\Big)$ in $\mathcal{R}^p$ uniformly over the class of hyper-rectangles $\mathcal{A}^{re}=\big\{\prod_{j=1}^{p}[a_j,b_j]\cap\mathcal{R}:-\infty\leq a_j\leq b_j \leq \infty,\ j=1,\ldots,p\big\}$, where $X_1,\dots,X_n$ are non-degenerate independent $p$-dimensional random vectors. We assume that the components of $X_i$ are independent and identically distributed (iid) and investigate the optimal cut-off rate of $\log p$ in the uniform central limit theorem (UCLT) for $T_n$ over $\mathcal{A}^{re}$. The aim is to reduce the exponential moment conditions, generally assumed for exponential growth of the dimension with respect to the sample size in high dimensional CLTs, to polynomial moment conditions. Indeed, we establish that the existence of a polynomial moment of some order in $[2,4]$ is sufficient for exponential growth of $p$. However, the rate of growth of $\log p$ cannot be improved beyond $o\big(n^{1/2}\big)$ as a power of $n$, even if the $X_{ij}$'s are iid across $(i,j)$ and $X_{11}$ is bounded. We also establish a near-$n^{-\kappa/2}$ Berry-Esseen rate for $T_n$ in high dimensions under the existence of the $(2+\kappa)$th absolute moment of $X_{ij}$ for $0< \kappa \leq 1$. When $\kappa =1$, the obtained Berry-Esseen rate is also shown to be optimal. As an application, we derive corresponding results for the component-wise Student's t-statistic, which may be useful in high dimensional statistical inference.
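
The statistic $T_n$ is straightforward to compute from an $n\times p$ data matrix: each coordinate is a column sum divided by the column's Euclidean norm. A minimal NumPy sketch (illustrative only; the function name and simulated data below are not from the paper) is:

```python
import numpy as np

def self_normalized_sums(X):
    """Component-wise self-normalized sums for an (n, p) data matrix X.

    Coordinate j of the output is sum_i X[i, j] / sqrt(sum_i X[i, j]**2),
    matching the definition of T_n in the abstract.
    """
    col_sums = X.sum(axis=0)                    # numerators: column sums
    col_norms = np.sqrt((X ** 2).sum(axis=0))   # denominators: column L2 norms
    return col_sums / col_norms

# Illustrative use: n = 500 observations, p = 1000 coordinates,
# centered entries with unit variance (exponential(1) shifted by -1).
rng = np.random.default_rng(0)
X = rng.standard_exponential((500, 1000)) - 1.0
T = self_normalized_sums(X)
print(T[:5])
```

As a side note (a standard identity, not stated in the abstract), each coordinate $T = S_n/V_n$ of this vector is a monotone transform of the component-wise Student's t-statistic, $t = T\sqrt{(n-1)/(n-T^{2})}$, which is why results for $T_n$ carry over to the t-statistic as mentioned in the abstract.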
