Bias-variance Tradeoff in Tensor Estimation (2509.17382v1)
Abstract: We study denoising of a third-order tensor when the ground-truth tensor is not necessarily Tucker low-rank. Specifically, we observe $$ Y = X^\ast + Z \in \mathbb{R}^{p_{1} \times p_{2} \times p_{3}}, $$ where $X^\ast$ is the ground-truth tensor and $Z$ is the noise tensor. We propose a simple variant of the higher-order tensor SVD estimator $\widetilde{X}$. We show that, uniformly over all user-specified Tucker ranks $(r_{1}, r_{2}, r_{3})$, $$ \| \widetilde{X} - X^\ast \|_{\mathrm{F}}^{2} = O\Big( \kappa^{2} \Big\{ r_{1} r_{2} r_{3} + \sum_{k=1}^{3} p_{k} r_{k} \Big\} \; + \; \xi_{(r_{1}, r_{2}, r_{3})}^{2} \Big) \quad \text{with high probability.} $$ Here, the bias term $\xi_{(r_1, r_2, r_3)}$ corresponds to the best achievable approximation error of $X^\ast$ over the class of tensors with Tucker ranks $(r_1, r_2, r_3)$; $\kappa^{2}$ quantifies the noise level; and the variance term $\kappa^{2} \{ r_{1} r_{2} r_{3} + \sum_{k=1}^{3} p_{k} r_{k} \}$ scales with the effective number of free parameters in the estimator $\widetilde{X}$. Our analysis achieves a clean rank-adaptive bias-variance tradeoff: as we increase the ranks of the estimator $\widetilde{X}$, the bias $\xi_{(r_{1}, r_{2}, r_{3})}$ decreases and the variance increases. As a byproduct, we also obtain a convenient bias-variance decomposition for vanilla low-rank SVD matrix estimators.
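To make the estimator concrete, here is a minimal NumPy sketch of a rank-truncated higher-order SVD denoiser: for each mode $k$, it projects the mode-$k$ unfolding of the observation onto its top-$r_k$ left singular subspace. This is a generic sequentially-truncated HOSVD, not necessarily the paper's exact variant; the function name `truncated_hosvd` and the sequential truncation order are assumptions for illustration.

```python
import numpy as np

def truncated_hosvd(Y, ranks):
    """Denoise a tensor Y by sequentially projecting each mode-k unfolding
    onto its leading rank-r_k left singular subspace (a generic HOSVD sketch)."""
    X = Y.copy()
    for k, r in enumerate(ranks):
        # Mode-k unfolding: bring axis k to the front, flatten the rest.
        Xm = np.moveaxis(X, k, 0)
        shp = Xm.shape
        unf = Xm.reshape(shp[0], -1)
        # Top-r left singular subspace of the unfolding.
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        Uk = U[:, :r]
        # Project the mode-k fibers onto that subspace.
        proj = Uk @ (Uk.T @ unf)
        # Fold back to tensor form.
        X = np.moveaxis(proj.reshape(shp), 0, k)
    return X
```

Choosing larger `ranks` shrinks the bias term $\xi_{(r_1,r_2,r_3)}$ but inflates the variance term, mirroring the tradeoff in the bound above.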