
Optimal Projected Variance Group-Sparse Block PCA (1705.00461v2)

Published 1 May 2017 in stat.ML

Abstract: We address the problem of defining a group-sparse formulation of Principal Component Analysis (PCA) - or its equivalent formulations as Low Rank approximation or Dictionary Learning problems - that achieves a compromise between maximizing the variance explained by the components and promoting sparsity of the loadings. We first propose a new definition of the variance explained by not necessarily orthogonal components, which is optimal in a certain sense and consistent with the principal components case. We then use a specific regularization of this variance by the group-$\ell_{1}$ norm to define a Group Sparse Maximum Variance (GSMV) formulation of PCA. The GSMV formulation achieves our objective by construction, and has the useful property that the inner nonsmooth optimization problem can be solved analytically, thus reducing GSMV to the maximization of a smooth convex function under unit-norm and orthogonality constraints, which generalizes Journee et al. (2010) to group sparsity. Numerical comparison with deflation on synthetic data shows that GSMV consistently produces slightly better and more robust results for the retrieval of hidden sparse structures, and is about three times faster on these examples. Application to real data shows the interest of group sparsity for variable selection in PCA of mixed (categorical/numerical) data.
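To make the group-$\ell_{1}$ regularization concrete, the following is a minimal sketch of block soft-thresholding, the proximal operator of the group-$\ell_{1}$ norm that underlies group-sparse loading updates. This is an illustrative building block, not the paper's GSMV algorithm; the function name, the toy vector, and the group partition are all hypothetical choices for the example.

```python
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Block soft-thresholding: proximal operator of the group-l1 norm.

    For each group g of indices, the sub-vector v[g] is shrunk toward
    zero by lam; when its l2 norm is at most lam, the whole group is
    set exactly to zero, producing group-wise sparsity.
    """
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v[g]
    return out

# Toy example: two groups of variables. With lam = 1.0 the weaker
# group (small l2 norm) is zeroed entirely, while the stronger group
# is only shrunk - the behavior that drives variable selection in
# group-sparse PCA formulations.
v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
z = group_soft_threshold(v, groups, lam=1.0)
```

Here the first group has norm 5, so it is rescaled by the factor (1 - 1/5); the second group has norm about 0.14 < 1 and is eliminated, which is how whole blocks of loadings (e.g. all dummy variables encoding one categorical variable) can be selected or dropped together.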

Citations (2)
