Scale matrix estimation under data-based loss in high and low dimensions

Published 30 May 2020 in math.ST, stat.AP, and stat.TH | (2006.00243v1)

Abstract: We consider the problem of estimating the scale matrix $\Sigma$ of the additive model $Y_{p\times n} = M + \mathcal{E}$, from a decision-theoretic point of view. Here, $p$ is the number of variables, $n$ is the number of observations, $M$ is a matrix of unknown parameters with rank $q<p$, and $\mathcal{E}$ is a random noise whose distribution is elliptically symmetric with covariance matrix proportional to $I_n \otimes \Sigma$. We deal with a canonical form of this model where $Y$ is decomposed into two matrices, namely $Z_{q\times p}$, which summarizes the information contained in $M$, and $U_{m\times p}$, where $m=n-q$, which summarizes the information sufficient to estimate $\Sigma$. As the natural estimators of the form $\hat{\Sigma}_{a} = a\, S$ (where $S = U^{T} U$ and $a$ is a positive constant) perform poorly when $p > m$ ($S$ non-invertible), we propose estimators of the form $\hat{\Sigma}_{a, G} = a\big( S + S\, S^{+} G(Z,S)\big)$, where $S^{+}$ is the Moore-Penrose inverse of $S$ (which coincides with $S^{-1}$ when $S$ is invertible). We provide conditions on the correction matrix $S S^{+} G(Z,S)$ under which $\hat{\Sigma}_{a, G}$ improves over $\hat{\Sigma}_{a}$ under the data-based loss $L_S(\Sigma, \hat{\Sigma}) = \mathrm{tr}\big( S^{+} \Sigma\, (\hat{\Sigma}\, \Sigma^{-1} - I_{p})^{2}\big)$. We adopt a unified approach to the two cases where $S$ is invertible ($p \leq m$) and $S$ is non-invertible ($p > m$).
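The estimator forms and the data-based loss described in the abstract can be sketched numerically. The snippet below is an illustrative reading of the formulas only, not the paper's implementation: the correction matrix $G(Z,S)$ is passed in as a hypothetical placeholder argument, since the paper's contribution is precisely the conditions such a $G$ must satisfy.

```python
import numpy as np

def natural_estimator(U, a):
    """Natural estimator: Sigma_hat_a = a * S, with S = U^T U (U is m x p)."""
    S = U.T @ U
    return a * S

def corrected_estimator(U, a, G):
    """Corrected estimator: Sigma_hat_{a,G} = a * (S + S S^+ G).

    S^+ is the Moore-Penrose inverse of S; it coincides with S^{-1}
    when S is invertible (p <= m). G stands in for G(Z, S), whose
    admissible forms are derived in the paper (placeholder here).
    """
    S = U.T @ U
    S_pinv = np.linalg.pinv(S)
    return a * (S + S @ S_pinv @ G)

def data_based_loss(Sigma, Sigma_hat, S):
    """Data-based loss: L_S = tr( S^+ Sigma (Sigma_hat Sigma^{-1} - I_p)^2 )."""
    p = Sigma.shape[0]
    D = Sigma_hat @ np.linalg.inv(Sigma) - np.eye(p)
    return np.trace(np.linalg.pinv(S) @ Sigma @ D @ D)
```

Note that with $G = 0$ the corrected estimator reduces to the natural one, and `np.linalg.pinv` handles both regimes ($p \leq m$ and $p > m$) uniformly, mirroring the unified treatment in the paper.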
