
Sequential Inverse Approximation of a Regularized Sample Covariance Matrix (1707.08885v1)

Published 27 Jul 2017 in stat.CO

Abstract: A central goal in scaling sequential machine learning methods is handling high-dimensional data spaces. A key related challenge is that many methods depend heavily on obtaining the inverse covariance matrix of the data. It is well known that covariance matrix estimation is problematic when the number of observations is small relative to the number of variables. A common way to tackle this problem is through a shrinkage estimator, which offers a compromise between the sample covariance matrix and a well-conditioned target matrix, with the aim of minimizing the mean-squared error. We derive sequential update rules to approximate the inverse of the shrinkage estimator of the covariance matrix. The approach paves the way for improved large-scale machine learning methods that involve sequential updates.
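The two ingredients the abstract combines can be sketched in a few lines: a shrinkage estimator that blends the sample covariance with a well-conditioned target, and a rank-one update of its inverse so a new observation does not force a full re-inversion. The sketch below is a generic illustration, not the paper's derivation: the scaled-identity target, the fixed shrinkage intensity `rho`, and the use of the Sherman-Morrison formula are all assumptions chosen for clarity.

```python
import numpy as np

def shrinkage_covariance(X, rho):
    """Shrinkage estimator: convex combination of the sample covariance
    and a well-conditioned scaled-identity target. (Generic form; the
    paper's exact estimator and shrinkage intensity may differ.)"""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)   # sample covariance
    F = (np.trace(S) / p) * np.eye(p)        # scaled-identity target
    return (1.0 - rho) * S + rho * F

def sherman_morrison_update(C_inv, x, w):
    """Rank-one inverse update: (C + w * x x^T)^{-1} computed from C^{-1}
    via the Sherman-Morrison formula, avoiding a fresh O(p^3) inversion."""
    Cx = C_inv @ x
    return C_inv - (w * np.outer(Cx, Cx)) / (1.0 + w * (x @ Cx))

# Illustration: keep the inverse current as a new observation arrives.
rng = np.random.default_rng(0)
p = 5
X = rng.normal(size=(20, p))
C = shrinkage_covariance(X, rho=0.1)
C_inv = np.linalg.inv(C)

x_new = rng.normal(size=p)
w = 0.05                                     # hypothetical update weight
C_updated = C + w * np.outer(x_new, x_new)   # rank-one perturbation
C_inv_updated = sherman_morrison_update(C_inv, x_new, w)

# The sequential update matches direct re-inversion.
assert np.allclose(C_inv_updated, np.linalg.inv(C_updated))
```

The assertion at the end checks that the sequentially updated inverse agrees with a direct inversion of the perturbed matrix, which is the correctness property any sequential inverse-approximation scheme of this kind must preserve.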
