
A Sharp Blockwise Tensor Perturbation Bound for Orthogonal Iteration (2008.02437v2)

Published 6 Aug 2020 in math.ST, cs.LG, cs.NA, math.NA, stat.ML, and stat.TH

Abstract: In this paper, we develop novel perturbation bounds for the high-order orthogonal iteration (HOOI) [DLDMV00b]. Under mild regularity conditions, we establish blockwise tensor perturbation bounds for HOOI with guarantees for both tensor reconstruction in Hilbert-Schmidt norm $\|\widehat{\mathcal{T}} - \mathcal{T}\|_{\mathrm{HS}}$ and mode-$k$ singular subspace estimation in Schatten-$q$ norm $\|\sin \Theta (\widehat{\mathbf{U}}_k, \mathbf{U}_k)\|_q$ for any $q \geq 1$. We show the upper bounds of mode-$k$ singular subspace estimation are unilateral and converge linearly to a quantity characterized by blockwise errors of the perturbation and signal strength. For the tensor reconstruction error bound, we express the bound through a simple quantity $\xi$, which depends only on the perturbation and the multilinear rank of the underlying signal. A rate-matching deterministic lower bound for tensor reconstruction, which demonstrates the optimality of HOOI, is also provided. Furthermore, we prove that one-step HOOI (i.e., HOOI with only a single iteration) is also optimal in terms of tensor reconstruction and can be used to lower the computational cost. The perturbation results are also extended to the case that only partial modes of $\mathcal{T}$ have low-rank structure. We support our theoretical results by extensive numerical studies. Finally, we apply the novel perturbation bounds of HOOI to two applications, tensor denoising and tensor co-clustering, from machine learning and statistics, which demonstrates the superiority of the new perturbation results.

Authors (4)
  1. Yuetian Luo (19 papers)
  2. Garvesh Raskutti (35 papers)
  3. Ming Yuan (71 papers)
  4. Anru R. Zhang (43 papers)
Citations (9)
