Large factor model estimation by nuclear norm plus $l_1$ norm penalization (2104.02422v1)

Published 6 Apr 2021 in math.ST, stat.ME, and stat.TH

Abstract: This paper provides a comprehensive estimation framework via nuclear norm plus $l_1$ norm penalization for high-dimensional approximate factor models with a sparse residual covariance. The underlying assumptions allow for non-pervasive latent eigenvalues and a prominent residual covariance pattern. In that context, existing approaches based on principal components may misestimate the latent rank, due to the numerical instability of sample eigenvalues. In contrast, the proposed optimization problem retrieves the latent covariance structure and exactly recovers the latent rank and the residual sparsity pattern. Conditional on these, the asymptotic rates of the subsequent ordinary least squares estimates of loadings and factor scores are provided, the recovered latent eigenvalues are shown to be maximally concentrated, and the estimates of factor scores via Bartlett's and Thompson's methods are proved to be the most precise given the data. The validity of the outlined results is demonstrated in an exhaustive simulation study and in a real financial data example.
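The abstract specifies the estimator only at a high level. As a rough illustration, the sketch below assumes the penalized objective takes the standard low-rank-plus-sparse form $\min_{L,S} \tfrac{1}{2}\lVert \hat{\Sigma} - L - S \rVert_F^2 + \lambda \lVert L \rVert_* + \rho \lVert S \rVert_1$, where $\hat{\Sigma}$ is the sample covariance, $L$ the low-rank latent covariance, and $S$ the sparse residual covariance, and solves it by plain proximal gradient descent. The function names, the solver choice, and the penalty weights $\lambda$ and $\rho$ are illustrative assumptions, not the paper's exact estimator or tuning.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svd_soft_threshold(M, t):
    # Singular-value soft-thresholding: proximal operator of t * ||.||_*.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft_threshold(s, t)) @ Vt

def low_rank_plus_sparse(sigma_hat, lam, rho, n_iter=500, step=0.5):
    # Hypothetical proximal-gradient sketch for
    #   min_{L,S} 0.5 * ||sigma_hat - L - S||_F^2
    #             + lam * ||L||_*  + rho * ||S||_1.
    # step = 0.5 is safe because the smooth term's gradient is
    # 2-Lipschitz in (L, S) jointly.
    p = sigma_hat.shape[0]
    L = np.zeros((p, p))
    S = np.zeros((p, p))
    for _ in range(n_iter):
        R = L + S - sigma_hat  # gradient of the smooth term w.r.t. L and S
        L = svd_soft_threshold(L - step * R, step * lam)
        S = soft_threshold(S - step * R, step * rho)
    return L, S

# Toy usage: a 3-factor model with weak idiosyncratic noise.
rng = np.random.default_rng(0)
p, r, n = 50, 3, 200
B = rng.normal(size=(p, r))            # loadings
F = rng.normal(size=(n, r))            # factor scores
X = F @ B.T + 0.5 * rng.normal(size=(n, p))
sigma_hat = np.cov(X, rowvar=False)
L, S = low_rank_plus_sparse(sigma_hat, lam=1.0, rho=0.05)
print("estimated latent rank:", np.linalg.matrix_rank(L, tol=1e-6))
```

Under this reading, the rank of $L$ estimates the number of latent factors and $S$ captures the sparse residual covariance; per the abstract, loadings and factor scores are then re-estimated by ordinary least squares conditional on this recovered structure.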
