Tensor Star Tensor Decomposition and Its Applications to Higher-order Compression and Completion (2403.10481v2)

Published 15 Mar 2024 in eess.IV and eess.SP

Abstract: A novel tensor decomposition framework, termed Tensor Star (TS) decomposition, is proposed which represents a new type of tensor network decomposition based on tensor contractions. This is achieved by connecting the core tensors in a ring, whereby the cores act as skip connections between the factor tensors and allow for direct correlation characterisation between any two arbitrary dimensions. Uniquely, this makes it possible to decompose an order-$N$ tensor into $N$ order-$3$ factor tensors $\{\mathcal{G}_{k}\}_{k=1}^{N}$ and $N$ order-$4$ core tensors $\{\mathcal{C}_{k}\}_{k=1}^{N}$, which are arranged in a star shape. Unlike the class of Tensor Train (TT) decompositions, these factor tensors are not directly connected to one another. The core tensors so obtained also enable consecutive factor tensors to have different latent ranks. In this way, the TS decomposition alleviates the "curse of dimensionality" and controls the "curse of ranks", exhibiting a storage complexity which scales linearly with the number of dimensions and as the fourth power of the ranks.
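To make the star topology concrete, below is a minimal NumPy sketch of a toy order-3 TS network. The leg convention is an assumption for illustration (factor $\mathcal{G}_k$ carries one physical leg and two bond legs to its adjacent cores; the order-4 cores form an inner ring and each bridges two neighbouring factors); it is consistent with the abstract's description but is not necessarily the paper's exact index ordering.

```python
import numpy as np

# Toy Tensor Star (TS) network for an order-3 tensor, under an ASSUMED
# leg convention: factor G_k has shape (bond_in, I_k, bond_out); core C_k
# has shape (ring_in, ring_out, bond_from_G_k, bond_to_G_{k+1}).
I1, I2, I3 = 4, 5, 6   # physical dimensions
r = 2                  # factor-core bond rank (may differ per edge in general)
m = 2                  # core-core ring rank

rng = np.random.default_rng(0)
G1 = rng.standard_normal((r, I1, r))
G2 = rng.standard_normal((r, I2, r))
G3 = rng.standard_normal((r, I3, r))
C1 = rng.standard_normal((m, m, r, r))
C2 = rng.standard_normal((m, m, r, r))
C3 = rng.standard_normal((m, m, r, r))

# Contract the star: factors are never contracted with each other directly;
# every factor-to-factor path passes through the inner core ring.
T = np.einsum('aib,cjd,ekf,mnbc,npde,pmfa->ijk', G1, G2, G3, C1, C2, C3)
print(T.shape)  # (4, 5, 6)
```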

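The linear-in-$N$, fourth-power-in-rank storage claim can be sanity-checked by parameter counting: $N$ order-3 factors contribute $O(NIr^2)$ entries and $N$ order-4 cores contribute $O(Nr^4)$. A sketch, assuming uniform physical dimension $I$ and a uniform rank $r$ on every bond (the function name and uniformity are illustrative assumptions):

```python
def ts_storage(N, I, r):
    """Parameter count of a TS decomposition of an order-N tensor with
    uniform physical dimension I and uniform rank r on every bond.
    Factors: N order-3 tensors of shape (r, I, r) -> N * I * r**2 entries.
    Cores:   N order-4 tensors of shape (r, r, r, r) -> N * r**4 entries.
    """
    return N * I * r**2 + N * r**4

N, I, r = 10, 8, 3
print(ts_storage(N, I, r))  # grows linearly with N and as r**4 ...
print(I**N)                 # ... while the dense tensor grows exponentially in N
```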
