A Parallel Computing Method for the Higher Order Tensor Renormalization Group (2110.03607v1)

Published 7 Oct 2021 in hep-lat

Abstract: In this paper, we propose a parallel computing method for the Higher Order Tensor Renormalization Group (HOTRG) applied to a $d$-dimensional ($d \geq 2$) simple lattice model. Sequential computation of the HOTRG requires $O(\chi^{4d-1})$ computational cost, where $\chi$ is the bond dimension, in the step that contracts tensor indices. When the elements of a local tensor are simply distributed across processes in a parallel computation of the HOTRG, frequent communication between processes occurs. The simplest way to avoid such communication is to hold all tensor elements in every process; however, this requires $O(\chi^{2d})$ memory space. In the presented method, placing a local tensor element on more than one process is allowed, and enough local tensor elements are distributed to each process to avoid inter-process communication during the computation step under consideration. For the part that dominates the computational cost, this distribution is achieved by assigning the elements of two local tensors to $\chi^2$ processes according to one index of each local tensor that is not contracted in that step. For $d \geq 3$, the computational cost in each process is reduced to $O(\chi^{4d-3})$ while the memory requirement per process is kept at $O(\chi^{2d-1})$.
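The distribution scheme described in the abstract can be made concrete with a small sketch. The NumPy snippet below is an assumption-laden illustration, not the paper's implementation: it emulates the $d = 2$ case serially, where each of the $\chi^2$ hypothetical "processes", labelled by one uncontracted index of each of the two local tensors, holds only the corresponding tensor slices and contracts them locally without communication; the result matches the sequential contraction.

```python
import numpy as np

# Toy, serial emulation of the index-distribution idea for d = 2
# (a sketch under assumptions; the paper's actual parallel code is not shown).
chi = 4  # bond dimension, kept small for illustration

rng = np.random.default_rng(0)
# Local tensor T[x, x', y, y'], all bonds of dimension chi.
T = rng.standard_normal((chi, chi, chi, chi))

# Sequential reference: contract two copies of T along the shared bond a,
#   M[x1, x2, x1', x2', y, y'] = sum_a T[x1, x1', y, a] * T[x2, x2', a, y'].
M_seq = np.einsum('pqus,rtsv->prqtuv', T, T)

# Parallel idea: "process" (x1, x2) holds only the slices T[x1] and T[x2]
# (O(chi^{2d-1}) elements each) and contracts them locally, with no communication.
max_err = 0.0
for p in range(chi):          # x1, an uncontracted index of the first tensor
    for r in range(chi):      # x2, an uncontracted index of the second tensor
        A = T[p]              # slice with indices (x1', y, a)
        B = T[r]              # slice with indices (x2', a, y')
        M_block = np.einsum('qus,tsv->qtuv', A, B)   # ~ chi^{4d-3} work per "process"
        max_err = max(max_err, np.abs(M_block - M_seq[p, r]).max())

print('max deviation from the sequential result:', max_err)
```

In an actual distributed run the double loop would be replaced by one rank per $(x_1, x_2)$ pair; note that the per-process cost and memory scalings quoted in the abstract are stated for $d \geq 3$, while this toy uses $d = 2$ only to keep the arrays small.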
