Optimizing Tensor Network Partitioning using Simulated Annealing (2507.20667v1)
Abstract: Tensor networks have proven to be a valuable tool, for instance, in the classical simulation of (strongly correlated) quantum systems. As system sizes increase, contracting the corresponding tensor networks becomes computationally demanding. In this work, we study distributed-memory architectures for high-performance computing implementations of this task. Efficiently distributing the contraction across multiple nodes is critical, as both computational and memory costs are highly sensitive to the chosen partitioning strategy. While prior work has employed general-purpose hypergraph partitioning algorithms, these approaches often overlook the specific structure and cost characteristics of tensor network contractions. We introduce a simulated-annealing-based method that iteratively refines the partitioning to minimize the total operation count, thereby reducing time-to-solution. Evaluated on MQT Bench circuits, the algorithm achieves an average 8$\times$ reduction in both computational and memory cost compared to a naive partitioning.
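To make the abstract's core idea concrete, the sketch below illustrates simulated-annealing refinement of a tensor-to-node assignment. It is a minimal illustration, not the paper's implementation: the paper minimizes the total contraction operation count, whereas this toy objective simply sums the weights of edges cut by the partition as a crude proxy for communication cost. All names (`anneal_partition`, the edge list, the cooling schedule) are hypothetical.

```python
# Minimal sketch of simulated-annealing partition refinement.
# Assumption: cut edge weight stands in for the paper's contraction cost model.
import math
import random

def anneal_partition(num_tensors, edges, num_parts,
                     t_start=1.0, t_end=1e-3, cooling=0.995, seed=0):
    """Refine a random assignment of tensors to parts by simulated annealing.

    edges: list of (u, v, weight) tuples; an edge whose endpoints land in
           different parts contributes its weight to the cost (a toy proxy).
    """
    rng = random.Random(seed)
    part = [rng.randrange(num_parts) for _ in range(num_tensors)]

    def cost(assign):
        # Sum weights of edges crossing the partition boundary.
        return sum(w for u, v, w in edges if assign[u] != assign[v])

    current = cost(part)
    best_cost, best_part = current, part[:]
    t = t_start
    while t > t_end:
        # Propose moving one random tensor to a different part.
        i = rng.randrange(num_tensors)
        old, new = part[i], rng.randrange(num_parts)
        if new != old:
            part[i] = new
            candidate = cost(part)
            delta = candidate - current
            # Metropolis acceptance: always take improvements; accept
            # worsening moves with probability exp(-delta / t).
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current = candidate
                if current < best_cost:
                    best_cost, best_part = current, part[:]
            else:
                part[i] = old  # reject: revert the move
        t *= cooling
    return best_part, best_cost

# Tiny usage example: a 6-tensor network split across 2 nodes.
if __name__ == "__main__":
    edges = [(0, 1, 4), (1, 2, 4), (2, 3, 1), (3, 4, 4), (4, 5, 4), (0, 5, 1)]
    assignment, cut_cost = anneal_partition(6, edges, num_parts=2)
    print(assignment, cut_cost)
```

The Metropolis acceptance step is what allows the search to escape local minima early on: at high temperature, cost-increasing moves are frequently accepted, and as the temperature cools the refinement converges toward a locally optimal partition.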