
Optimal Shrinkage of Singular Values (1405.7511v3)

Published 29 May 2014 in math.ST and stat.TH

Abstract: We consider recovery of low-rank matrices from noisy data by shrinkage of singular values, in which a single, univariate nonlinearity is applied to each of the empirical singular values. We adopt an asymptotic framework, in which the matrix size is much larger than the rank of the signal matrix to be recovered, and the signal-to-noise ratio of the low-rank piece stays constant. For a variety of loss functions, including Mean Square Error (MSE, i.e., squared Frobenius norm), the nuclear norm loss and the operator norm loss, we show that in this framework there is a well-defined asymptotic loss that we evaluate precisely in each case. In fact, each of the loss functions we study admits a unique admissible shrinkage nonlinearity dominating all other nonlinearities. We provide a general method for evaluating these optimal nonlinearities, and demonstrate our framework by working out simple, explicit formulas for the optimal nonlinearities in the Frobenius, nuclear and operator norm cases. For example, for a square low-rank $n$-by-$n$ matrix observed in white noise with level $\sigma$, the optimal nonlinearity for MSE loss simply shrinks each data singular value $y$ to $\sqrt{y^2-4n\sigma^2}$ (or to 0 if $y<2\sqrt{n}\sigma$). This optimal nonlinearity guarantees an asymptotic MSE of $2nr\sigma^2$, which compares favorably with optimally tuned hard thresholding and optimally tuned soft thresholding, providing guarantees of $3nr\sigma^2$ and $6nr\sigma^2$, respectively. Our general method also allows one to evaluate optimal shrinkers numerically to arbitrary precision. As an example, we compute optimal shrinkers for the Schatten-$p$ norm loss, for any $p>0$.
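
The explicit Frobenius-loss shrinker quoted in the abstract translates directly into a few lines of NumPy. The sketch below is a minimal illustration under the abstract's stated assumptions (square $n$-by-$n$ data matrix, white noise with known level $\sigma$); the function name `optimal_frobenius_shrinker` and its interface are invented here for the example and are not the authors' reference implementation. It takes the SVD of the noisy matrix, maps each empirical singular value $y$ to $\sqrt{y^2 - 4n\sigma^2}$ (or to 0 when $y < 2\sqrt{n}\sigma$), and reassembles the estimate.

```python
import numpy as np

def optimal_frobenius_shrinker(Y, sigma):
    """Denoise a square matrix Y = X + Z by singular-value shrinkage,
    using the optimal MSE-loss nonlinearity quoted in the abstract:
    each empirical singular value y is shrunk to sqrt(y^2 - 4*n*sigma^2),
    or to 0 when y < 2*sqrt(n)*sigma.

    Illustrative sketch only; name and interface are this example's own.
    """
    n = Y.shape[0]  # square n-by-n matrix assumed
    U, y, Vt = np.linalg.svd(Y, full_matrices=False)
    cutoff = 2.0 * np.sqrt(n) * sigma  # bulk edge of the noise singular values
    shrunk = np.where(
        y >= cutoff,
        np.sqrt(np.maximum(y**2 - 4.0 * n * sigma**2, 0.0)),
        0.0,
    )
    return U @ np.diag(shrunk) @ Vt

# Usage sketch: rank-r signal with singular values well above the noise bulk edge.
rng = np.random.default_rng(0)
n, r, sigma = 500, 2, 1.0
U0, _ = np.linalg.qr(rng.standard_normal((n, r)))
V0, _ = np.linalg.qr(rng.standard_normal((n, r)))
X = U0 @ np.diag([5.0 * np.sqrt(n) * sigma] * r) @ V0.T
Y = X + sigma * rng.standard_normal((n, n))
Xhat = optimal_frobenius_shrinker(Y, sigma)
```

With singal singular values this far above the $2\sqrt{n}\sigma$ cutoff, the squared Frobenius error of `Xhat` should land near the $2nr\sigma^2$ asymptotic guarantee from the abstract, versus $3nr\sigma^2$ and $6nr\sigma^2$ for optimally tuned hard and soft thresholding.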
