
Undercomplete Decomposition of Symmetric Tensors in Linear Time, and Smoothed Analysis of the Condition Number (2403.00643v2)

Published 1 Mar 2024 in cs.DS, cs.CC, cs.NA, and math.NA

Abstract: We study symmetric tensor decompositions, i.e., decompositions of the form $T = \sum_{i=1}^r u_i^{\otimes 3}$ where $T$ is a symmetric tensor of order 3 and $u_i \in \mathbb{C}^n$. In order to obtain efficient decomposition algorithms, it is necessary to require additional properties of the $u_i$. In this paper we assume that the $u_i$ are linearly independent. This implies $r \leq n$, that is, the decomposition of $T$ is undercomplete. We give a randomized algorithm for the following problem in the exact arithmetic model of computation: let $T$ be an order-3 symmetric tensor that has an undercomplete decomposition. Then, given some $T'$ close to $T$, an accuracy parameter $\varepsilon$, and an upper bound $B$ on the condition number of the tensor, output vectors $u'_i$ such that $\|u_i - u'_i\| \leq \varepsilon$ (up to permutation and multiplication by cube roots of unity) with high probability. The main novel features of our algorithm are: 1) We provide the first algorithm for this problem that runs in time linear in the size of the input tensor; more specifically, it requires $O(n^3)$ arithmetic operations for all accuracy parameters $\varepsilon = 1/\mathrm{poly}(n)$ and $B = \mathrm{poly}(n)$. 2) Our algorithm is robust, that is, it can handle inverse-quasi-polynomial noise (in $n$, $B$, and $1/\varepsilon$) in the input tensor. 3) We present a smoothed analysis of the condition number of the tensor decomposition problem. This guarantees that the condition number is low with high probability, and further shows that our algorithm runs in linear time except on rare, badly conditioned inputs. Our main algorithm is a reduction to the complete case ($r = n$) treated in our previous work [Koiran, Saha, CIAC 2023]. For efficiency reasons we cannot use this algorithm as a black box. Instead, we show that it can be run on an implicitly represented tensor obtained from the input tensor by a change of basis.
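To make the problem statement concrete, the sketch below decomposes a generic undercomplete order-3 symmetric tensor with the classical simultaneous-diagonalization approach (Jennrich's algorithm). This is not the paper's algorithm: it works over the reals rather than $\mathbb{C}$, uses numpy as an assumed dependency, and comes with none of the paper's linear-time, robustness, or smoothed-analysis guarantees. It only illustrates what "undercomplete decomposition with linearly independent $u_i$" means.

```python
# Minimal sketch (Jennrich's algorithm, not the paper's method): recover
# T = sum_i u_i^{⊗3} with linearly independent u_i, r <= n (undercomplete).
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 5  # ambient dimension n and rank r <= n

# Ground truth: generic Gaussian columns u_i are linearly independent.
U = rng.standard_normal((n, r))
T = np.einsum('ia,ja,ka->ijk', U, U, U)  # T = sum_i u_i^{⊗3}

# Contract T along the third mode with random vectors a and b:
# T(.,.,a) = U diag(U^T a) U^T, and similarly for b.
a, b = rng.standard_normal(n), rng.standard_normal(n)
Ta = np.einsum('ijk,k->ij', T, a)
Tb = np.einsum('ijk,k->ij', T, b)

# The columns of U are eigenvectors of M = Ta pinv(Tb), with generically
# distinct eigenvalues (a.u_i)/(b.u_i); the other n-r eigenvalues are ~0.
M = Ta @ np.linalg.pinv(Tb)
eigvals, eigvecs = np.linalg.eig(M)
idx = np.argsort(-np.abs(eigvals))[:r]   # keep the r dominant eigenpairs
V = np.real(eigvecs[:, idx])
V = V / np.linalg.norm(V, axis=0)        # unit directions v_i ≈ ±u_i/||u_i||

# Recover scales: T = sum_i w_i v_i^{⊗3}, so the weights w_i solve a linear
# least-squares problem; then u_i = cbrt(w_i) v_i (cbrt also fixes the sign).
A = np.stack([np.einsum('i,j,k->ijk', v, v, v).ravel() for v in V.T], axis=1)
w, *_ = np.linalg.lstsq(A, T.ravel(), rcond=None)
U_rec = V * np.cbrt(w)

# Sanity check: the recovered vectors reproduce T (up to permutation).
T_rec = np.einsum('ia,ja,ka->ijk', U_rec, U_rec, U_rec)
print(np.linalg.norm(T - T_rec) / np.linalg.norm(T))  # ≈ 1e-12
```

Intuitively, the condition-number bound $B$ in the abstract quantifies how far the $u_i$ are from linear dependence: in this sketch, nearly dependent $u_i$ make the pseudoinverse and the eigenvector computation ill-conditioned, which is why a smoothed analysis of the condition number can translate into a high-probability running-time guarantee.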

References (23)
  1. Pencil-based algorithms for tensor rank decomposition are not stable. SIAM Journal on Matrix Analysis and Applications, 40(2):739–773, 2019.
  2. P. Bürgisser and F. Cucker. Condition: The Geometry of Numerical Algorithms. Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, 2013.
  3. Smoothed Analysis of Tensor Decompositions. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC, 2014.
  4. Complexity and Real Computation. Springer-Verlag, 1998.
  5. Pseudospectral shattering, the sign function, and diagonalization in nearly matrix multiplication time. Foundations of Computational Mathematics, Aug 2022. Preliminary version in Symposium on Foundations of Computer Science (FOCS), 2020.
  6. When can forward stable algorithms be composed stably? IMA Journal of Numerical Analysis, page drad026, May 2023.
  7. On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bulletin of the American Mathematical Society, 21(1):1–46, July 1989.
  8. Distributional and $L^q$ norm inequalities for polynomials over convex bodies in $\mathbb{R}^n$. Mathematical Research Letters, 8:233–248, 2001.
  9. Fast linear algebra is stable. Numerische Mathematik, 108(1), 2007.
  10. Fast matrix multiplication is stable. Numerische Mathematik, 106(2), 2007.
  11. A PSPACE Construction of a Hitting Set for the Closure of Small Algebraic Circuits. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC, 2018.
  12. Improved rectangular matrix multiplication using powers of the Coppersmith-Winograd tensor. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). 2018.
  13. R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multi-mode factor analysis. UCLA Working Papers in Phonetics, 1970.
  14. Johan Håstad. Tensor rank is NP-complete. In Automata, Languages and Programming. Springer Berlin Heidelberg, 1989.
  15. Neeraj Kayal. Efficient algorithms for some special cases of the polynomial equivalence problem. In Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics, January 2011.
  16. Joseph B. Kruskal. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2), 1977.
  17. Derandomization and Absolute Reconstruction for Sums of Powers of Linear Forms. Theor. Comput. Sci., 887, 2021.
  18. Absolute reconstruction for sums of powers of linear forms: degree 3 and beyond. Computational Complexity, 32(2), August 2023.
  19. P. Koiran and S. Saha. Complete decomposition of symmetric tensors in linear time and polylogarithmic precision. In 13th International Conference on Algorithms and Complexity (CIAC 2023), 2023. Full version on arXiv.
  20. B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000.
  21. Ankur Moitra. Algorithmic aspects of machine learning. Cambridge University Press, 2018.
  22. Subhayan Saha. Algebraic and Numerical Algorithms for Symmetric Tensor Decompositions. PhD thesis, École normale supérieure de Lyon, December 2023.
  23. Yaroslav Shitov. How hard is the tensor rank? arXiv preprint, 2016.