Second-order optimization for tensors with fixed tensor-train rank (2011.13395v1)
Abstract: There are several different notions of "low rank" for tensors, associated to different formats. Among them, the Tensor Train (TT) format is particularly well suited for tensors of high order, as it circumvents the curse of dimensionality: a desirable property for certain high-dimensional applications. It is often convenient to model such applications as optimization over the set of tensors with fixed (and low) TT rank. That set is a smooth manifold. Exploiting this fact, others have shown that Riemannian optimization techniques can perform particularly well on tasks such as tensor completion and certain large-scale linear systems arising from PDEs. So far, however, these optimization techniques have been limited to first-order methods, likely because of the technical hurdles in deriving exact expressions for the Riemannian Hessian. In this paper, we derive a formula and an efficient algorithm to compute the Riemannian Hessian on this manifold. This allows us to implement second-order optimization algorithms (namely, the Riemannian trust-region method) and to analyze the conditioning of optimization problems over the fixed TT rank manifold. In settings of interest, we show improved optimization performance on tensor completion compared to first-order methods and alternating least squares (ALS). Our work could have applications in the training of neural networks with tensor layers. Our code is freely available.
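For readers unfamiliar with the TT format the abstract builds on, the following is a minimal sketch of a TT decomposition via sequential truncated SVDs (the standard TT-SVD construction, not the paper's Riemannian algorithm). Function names, the truncation strategy, and the use of NumPy are illustrative assumptions, not details from the paper.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    # Illustrative sketch: split a full tensor into TT cores by
    # repeatedly reshaping to a matrix and taking a truncated SVD.
    # Each core k has shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1.
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank_prev = 1
    mat = tensor.reshape(rank_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = min(max_rank, len(s))  # cap the TT rank at max_rank
        cores.append(U[:, :rank].reshape(rank_prev, dims[k], rank))
        # Carry the remainder forward and expose the next mode.
        mat = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        rank_prev = rank
    cores.append(mat.reshape(rank_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the train of cores back into the full tensor.
    out = cores[0]  # shape (1, n_1, r_1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])  # drop the boundary ranks of 1
```

The storage cost of the cores grows linearly in the order d (for fixed mode sizes and ranks), rather than exponentially as for the full tensor, which is the curse-of-dimensionality advantage the abstract refers to; the set of tensors whose TT ranks are exactly fixed is the smooth manifold on which the paper's second-order methods operate.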