Towards Using Matrix-Free Tensor Decompositions to Systematically Improve Approximate Tensor-Networks (2503.10380v2)
Abstract: We investigate a novel approach to approximate tensor-network contraction via the exact, matrix-free decomposition of full tensor networks. We study this method as a means to eliminate the propagation of error in the approximation of tensor networks. Importantly, this decomposition-based approach is generic: it does not depend on a specific tensor network, the (physical) ordering of the tensor indices, or the choice of tensor decomposition, although careful consideration should be given to determining the best decomposition strategy. Furthermore, this method does not rely on robust cancellation of errors (as in, e.g., a Taylor expansion). To study the effectiveness of the approach, we replace the exact contraction of the particle-particle ladder (PPL) tensor diagram in the popular coupled-cluster with single and double excitations (CCSD) method with a low-rank tensor decomposition, namely the canonical polyadic decomposition (CPD). With this approach, we replace an $\mathcal{O}(N^6)$ tensor contraction with a potentially reduced-scaling $\mathcal{O}(N^4 R)$ optimization problem, where $R$ is the CP rank, and we reduce the computational storage of the PPL tensor from $\mathcal{O}(N^4)$ to $\mathcal{O}(NR)$, although we do not take advantage of this compression in this study. To minimize the cost of the CPD optimization, we utilize the iterative structure of CCSD to efficiently initialize the CPD optimization. We show that chemically relevant energy values can be computed with an error of less than 1 kcal/mol using a relatively low CP rank.
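To make the CPD concrete: a canonical polyadic decomposition approximates an order-4 tensor (such as the PPL intermediate) as a sum of $R$ rank-one terms, $T_{ijkl} \approx \sum_r A_{ir} B_{jr} C_{kr} D_{lr}$, so storage drops from $\mathcal{O}(N^4)$ to $\mathcal{O}(NR)$. The sketch below is a generic alternating-least-squares (ALS) fit in NumPy, not the paper's implementation; all function names here are illustrative.

```python
import numpy as np

def khatri_rao(mats):
    # Column-wise Khatri-Rao product of matrices that share a column count R;
    # the later matrix's row index varies fastest, matching C-order unfoldings.
    R = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, R)
    return out

def cp_als(T, rank, n_iter=100, seed=0):
    # Alternating least squares for the CPD of an order-4 tensor T.
    # Returns factor matrices [A, B, C, D], each of shape (dim, rank).
    rng = np.random.default_rng(seed)
    dims = T.shape
    factors = [rng.standard_normal((d, rank)) for d in dims]
    for _ in range(n_iter):
        for mode in range(4):
            others = [factors[m] for m in range(4) if m != mode]
            # Mode-n unfolding: bring `mode` to the front, flatten the rest.
            unf = np.moveaxis(T, mode, 0).reshape(dims[mode], -1)
            # Solve unf ≈ factors[mode] @ khatri_rao(others).T in least squares.
            kr = khatri_rao(others)
            factors[mode] = np.linalg.lstsq(kr, unf.T, rcond=None)[0].T
    return factors

def cp_reconstruct(factors):
    # Rebuild the full tensor from its rank-R CP factors.
    A, B, C, D = factors
    return np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)
```

Once the factors are available, the expensive contraction involving the full tensor can instead be written against the $\mathcal{O}(NR)$-sized factors, which is the source of the $\mathcal{O}(N^4 R)$ scaling quoted in the abstract.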