An unexpected encounter with Cauchy and Lévy (1505.01957v2)
Abstract: The Cauchy distribution is usually presented as a mathematical curiosity, an exception to the Law of Large Numbers, or even as an "Evil" distribution in some introductory courses. It therefore surprised us when Drton and Xiao (2016) proved the following result for $m=2$ and conjectured it for $m\ge 3$. Let $X = (X_1,\ldots, X_m)$ and $Y = (Y_1,\ldots, Y_m)$ be i.i.d. $N(0,\Sigma)$, where $\Sigma=\{\sigma_{ij}\}\ge 0$ is an \textit{arbitrary} $m\times m$ covariance matrix with $\sigma_{jj}>0$ for all $1\leq j\leq m$. Then $$Z = \sum_{j=1}^m w_j \frac{X_j}{Y_j} \ \sim \mathrm{Cauchy}(0,1),$$ as long as $w=(w_1,\ldots, w_m)$ is independent of $(X, Y)$, $w_j\ge 0$ for $j=1,\ldots, m$, and $\sum_{j=1}^m w_j=1$. In this note, we present an elementary proof of this conjecture for any $m \geq 2$ by linking $Z$ to a geometric characterization of Cauchy(0,1) given in Williams (1969). This general result is essential to the large-sample behavior of Wald tests in many applications such as factor models and contingency tables. It also leads to other unexpected results such as $$ \sum_{i=1}^m\sum_{j=1}^m \frac{w_i w_j\sigma_{ij}}{X_i X_j} \sim \mathrm{L\'{e}vy}(0, 1). $$ This generalizes the "super Cauchy phenomenon" that the average of $m$ i.i.d. standard L\'evy variables (i.e., inverse chi-squared variables with one degree of freedom) has the same distribution as a single standard L\'evy variable multiplied by $m$ (obtained by taking $w_j=1/m$ and $\Sigma$ to be the identity matrix).
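Both displayed identities are easy to probe numerically. The following is a minimal Monte Carlo sketch (not part of the paper): it draws an arbitrary covariance matrix and Dirichlet weights, then compares empirical quantiles of $Z$ and of the quadratic form against Cauchy(0,1) and Lévy(0,1) quantiles. The specific choices (dimension $m=3$, the seed, the sample size) are illustrative assumptions only.

```python
# Monte Carlo sketch of the two identities in the abstract (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

m = 3
A = rng.standard_normal((m, m))
Sigma = A @ A.T + 0.5 * np.eye(m)   # arbitrary covariance matrix, positive diagonal
w = rng.dirichlet(np.ones(m))       # w_j >= 0, sum to 1, independent of (X, Y)

n = 500_000
X = rng.multivariate_normal(np.zeros(m), Sigma, size=n)  # X ~ N(0, Sigma)
Y = rng.multivariate_normal(np.zeros(m), Sigma, size=n)  # Y ~ N(0, Sigma), independent of X

# Z = sum_j w_j X_j / Y_j should follow Cauchy(0, 1).
Z = (w * X / Y).sum(axis=1)

# T = sum_{i,j} w_i w_j sigma_ij / (X_i X_j) should follow Levy(0, 1).
R = w / X
T = np.einsum("ni,ij,nj->n", R, Sigma, R)

# Both limiting laws are heavy-tailed, so compare quantiles rather than moments.
p = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
print("Z quantiles (empirical):", np.round(np.quantile(Z, p), 3))
print("Cauchy(0,1) quantiles:  ", np.round(stats.cauchy.ppf(p), 3))
print("T quantiles (empirical):", np.round(np.quantile(T, p), 3))
print("Levy(0,1) quantiles:    ", np.round(stats.levy.ppf(p), 3))
```

With any valid covariance matrix and weight vector in place of these arbitrary choices, the empirical quantiles should track the Cauchy(0,1) and Lévy(0,1) reference quantiles up to Monte Carlo error.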