Cross Interpolation for Solving High-Dimensional Dynamical Systems on Low-Rank Tucker and Tensor Train Manifolds (2403.12826v2)
Abstract: We present a novel tensor interpolation algorithm for the time integration of nonlinear tensor differential equations (TDEs) on the tensor train and Tucker tensor low-rank manifolds, which are the building blocks of many tensor network decompositions. This paper builds upon our previous work (Donello et al., Proceedings of the Royal Society A, Vol. 479, 2023) on solving nonlinear matrix differential equations on low-rank matrix manifolds using CUR decompositions. The methodology we present offers multiple advantages: (i) It delivers near-optimal computational savings both in terms of memory and floating-point operations by leveraging cross algorithms based on the discrete empirical interpolation method to strategically sample sparse entries of the time-discrete TDEs to advance the solution in low-rank form. (ii) Numerical demonstrations show that the time integration is robust in the presence of small singular values. (iii) High-order explicit Runge-Kutta time integration schemes are developed. (iv) The algorithm is easy to implement, as it requires the evaluation of the full-order model at strategically selected entries and does not use tangent space projections, whose efficient implementation is intrusive. We demonstrate the efficiency of the presented algorithm for several test cases, including a nonlinear 100-dimensional TDE for the evolution of a tensor of size $70^{100} \approx 3.2 \times 10^{184}$ and a stochastic advection-diffusion-reaction equation with a tensor of size $4.7 \times 10^9$.
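The matrix analogue of the cross sampling described above is a CUR decomposition with interpolation indices chosen by the greedy DEIM procedure of Chaturantabut and Sorensen. The following minimal sketch illustrates that idea in Python; the function names and the synthetic rank-5 matrix are illustrative, not part of the paper's implementation, which operates on Tucker and tensor train cores rather than a dense matrix.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for an n-by-r orthonormal basis U.

    Returns r row indices at which to sample so that the oblique
    interpolatory projector built from those rows is well conditioned.
    """
    n, r = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, r):
        # Interpolate the j-th basis vector at the indices chosen so far,
        # then pick the row where the interpolation residual is largest.
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        res = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(res))))
    return np.array(idx)

# CUR-style reconstruction of a low-rank matrix from DEIM-selected
# rows and columns (the sparse "strategically selected entries").
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))  # rank 5
Uc, _, Vh = np.linalg.svd(A, full_matrices=False)
rows = deim_indices(Uc[:, :5])      # 5 sampled row indices
cols = deim_indices(Vh[:5, :].T)    # 5 sampled column indices
C, R = A[:, cols], A[rows, :]
W = A[np.ix_(rows, cols)]           # 5-by-5 intersection block
A_cur = C @ np.linalg.solve(W, R)   # CUR approximation C W^{-1} R
err = np.linalg.norm(A - A_cur) / np.linalg.norm(A)
```

For an exactly rank-5 matrix this reconstruction is exact up to round-off while touching only 5 rows, 5 columns, and their intersection, which is the source of the memory and flop savings the abstract claims for the tensor setting.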
- M. Donello, G. Palkar, M. H. Naderi, D. C. Del Rey Fernández, and H. Babaee, “Oblique projection for scalable rank-adaptive reduced-order modelling of nonlinear stochastic partial differential equations with time-dependent bases,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 479, no. 2278, p. 20230320, 2023.
- T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
- L. Grasedyck, D. Kressner, and C. Tobler, “A literature survey of low-rank tensor approximation techniques,” GAMM-Mitteilungen, vol. 36, pp. 53–78, 2013.
- L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
- R. A. Harshman, “Foundations of the PARAFAC procedure: Models and conditions for an ‘explanatory’ multi-modal factor analysis,” UCLA Working Papers in Phonetics, vol. 16, pp. 1–84, 1970.
- L. Grasedyck, “Hierarchical singular value decomposition of tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 31, pp. 2029–2054, 2010.
- I. V. Oseledets, “Tensor-train decomposition,” SIAM Journal on Scientific Computing, vol. 33, pp. 2295–2317, 2011.
- I. Oseledets and E. Tyrtyshnikov, “TT-cross approximation for multidimensional arrays,” Linear Algebra and its Applications, vol. 432, no. 1, pp. 70–88, 2010.
- L. Li, W. Yu, and K. Batselier, “Faster tensor train decomposition for sparse data,” Journal of Computational and Applied Mathematics, vol. 405, p. 113972, 2022.
- K. Batselier, Z. Chen, and N. Wong, “Tensor network alternating linear scheme for MIMO Volterra system identification,” Automatica, vol. 84, pp. 26–35, 2017.
- Z. Zhang, X. Yang, I. V. Oseledets, G. E. Karniadakis, and L. Daniel, “Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 34, no. 1, pp. 63–76, 2015.
- D. Kressner and A. Uschmajew, “On low-rank approximability of solutions to high-dimensional operator equations and eigenvalue problems,” Linear Algebra and its Applications, vol. 493, pp. 556–572, 2016.
- Z. Chen, K. Batselier, J. A. K. Suykens, and N. Wong, “Parallelized tensor train learning of polynomial classifiers,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 10, pp. 4621–4632, 2018.
- Y. Wang, W. Zhang, Z. Yu, Z. Gu, H. Liu, Z. Cai, C. Wang, and S. Gao, “Support vector machine based on low-rank tensor train decomposition for big data applications,” in 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), pp. 850–853, 2017.
- W. Wang, V. Aggarwal, and S. Aeron, “Tensor train neighborhood preserving embedding,” IEEE Transactions on Signal Processing, vol. 66, no. 10, pp. 2724–2732, 2018.
- A. Cichocki, “Era of big data processing: A new approach via tensor networks and tensor decompositions,” 2014.
- O. Koch and C. Lubich, “Dynamical tensor approximation,” SIAM Journal on Matrix Analysis and Applications, vol. 31, pp. 2360–2375, 2010.
- C. Lubich, B. Vandereycken, and H. Walach, “Time integration of rank-constrained Tucker tensors,” SIAM Journal on Numerical Analysis, vol. 56, no. 3, pp. 1273–1290, 2018.
- A. Veit and L. R. Scott, “Using the tensor-train approach to solve the ground-state eigenproblem for hydrogen molecules,” SIAM Journal on Scientific Computing, vol. 39, no. 1, pp. B190–B220, 2017.
- S. Dolgov, D. Kalise, and K. K. Kunisch, “Tensor decomposition methods for high-dimensional Hamilton-Jacobi-Bellman equations,” SIAM Journal on Scientific Computing, vol. 43, no. 3, pp. A1625–A1650, 2021.
- N. Gourianov, M. Lubasch, S. Dolgov, Q. Y. van den Berg, H. Babaee, P. Givi, M. Kiffner, and D. Jaksch, “A quantum-inspired approach to exploit turbulence structures,” Nature Computational Science, vol. 2, no. 1, pp. 30–37, 2022.
- I. Gavrilyuk and B. N. Khoromskij, “Tensor numerical methods: Actual theory and recent applications,” Computational Methods in Applied Mathematics, vol. 19, no. 1, pp. 1–4, 2019.
- M. Donello, M. H. Carpenter, and H. Babaee, “Computing sensitivities in evolutionary systems: A real-time reduced order modeling strategy,” SIAM Journal on Scientific Computing, pp. A128–A149, 2022.
- A. Amiri-Margavi and H. Babaee, “Low-rank solution operator for forced linearized dynamics with unsteady base flows,” 2023.
- M. H. Naderi and H. Babaee, “Adaptive sparse interpolation for accelerating nonlinear stochastic reduced-order modeling with time-dependent bases,” Computer Methods in Applied Mechanics and Engineering, vol. 405, p. 115813, 2023.
- C. Lubich, I. V. Oseledets, and B. Vandereycken, “Time integration of tensor trains,” SIAM Journal on Numerical Analysis, vol. 53, no. 2, pp. 917–941, 2015.
- J. Haegeman, C. Lubich, I. Oseledets, B. Vandereycken, and F. Verstraete, “Unifying time evolution and optimization with matrix product states,” Phys. Rev. B, vol. 94, p. 165116, 2016.
- G. Ceruti and C. Lubich, “An unconventional robust integrator for dynamical low-rank approximation,” BIT Numerical Mathematics, vol. 62, no. 1, pp. 23–44, 2022.
- C. Lubich and I. V. Oseledets, “A projector-splitting integrator for dynamical low-rank approximation,” BIT Numerical Mathematics, vol. 54, no. 1, pp. 171–188, 2014.
- E. Kieri, C. Lubich, and H. Walach, “Discretized dynamical low-rank approximation in the presence of small singular values,” SIAM Journal on Numerical Analysis, vol. 54, no. 2, pp. 1020–1038, 2016.
- G. Ceruti, J. Kusch, and C. Lubich, “A rank-adaptive robust integrator for dynamical low-rank approximation,” BIT Numerical Mathematics, vol. 62, no. 4, pp. 1149–1174, 2022.
- G. Ceruti, C. Lubich, and D. Sulz, “Rank-adaptive time integration of tree tensor networks,” SIAM Journal on Numerical Analysis, vol. 61, no. 1, pp. 194–222, 2023.
- G. Ceruti, C. Lubich, and H. Walach, “Time integration of tree tensor networks,” SIAM Journal on Numerical Analysis, vol. 59, no. 1, pp. 289–313, 2021.
- G. Ceruti, L. Einkemmer, J. Kusch, and C. Lubich, “A robust second-order low-rank BUG integrator based on the midpoint rule,” 2024.
- E. Kieri and B. Vandereycken, “Projection methods for dynamical low-rank approximation of high-dimensional problems,” Computational Methods in Applied Mathematics, vol. 19, no. 1, pp. 73–92, 2019.
- A. Rodgers, A. Dektor, and D. Venturi, “Adaptive integration of nonlinear evolution equations on tensor manifolds,” Journal of Scientific Computing, vol. 92, no. 2, p. 39, 2022.
- S. Chaturantabut and D. C. Sorensen, “Nonlinear model reduction via discrete empirical interpolation,” SIAM Journal on Scientific Computing, vol. 32, no. 5, pp. 2737–2764, 2010.
- C. Pagliantini and F. Vismara, “Fully adaptive structure-preserving hyper-reduction of parametric Hamiltonian systems,” 2023.
- B. Ghahremani and H. Babaee, “A DEIM Tucker tensor cross algorithm and its application to dynamical low-rank approximation,” Computer Methods in Applied Mechanics and Engineering, vol. 423, p. 116879, 2024.
- A. Dektor, “A collocation method for nonlinear tensor differential equations on low-rank manifolds,” 2024.
- C. Lubich, T. Rohwedder, R. Schneider, and B. Vandereycken, “Dynamical approximation by hierarchical Tucker and tensor-train tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 34, no. 2, pp. 470–494, 2013.
- S. Dolgov, D. Kressner, and C. Strössner, “Functional Tucker approximation using Chebyshev interpolation,” SIAM Journal on Scientific Computing, vol. 43, no. 3, pp. A2190–A2210, 2021.
- S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin, “A theory of pseudoskeleton approximations,” Linear Algebra and its Applications, vol. 261, no. 1, pp. 1–21, 1997.
- B. Peherstorfer, Z. Drmač, and S. Gugercin, “Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points,” SIAM Journal on Scientific Computing, vol. 42, no. 5, pp. A2837–A2864, 2020.
- M. W. Mahoney and P. Drineas, “CUR matrix decompositions for improved data analysis,” Proceedings of the National Academy of Sciences, vol. 106, no. 3, pp. 697–702, 2009.
- D. Ramezanian, A. G. Nouri, and H. Babaee, “On-the-fly reduced order modeling of passive and reactive species via time-dependent manifolds,” Computer Methods in Applied Mechanics and Engineering, vol. 382, p. 113882, 2021.