Robust approximation of tensor networks: application to grid-free tensor factorization of the Coulomb interaction (2012.13002v2)

Published 23 Dec 2020 in physics.chem-ph

Abstract: Approximation of a tensor network by approximating (e.g., factorizing) one or more of its constituent tensors can be improved by canceling the leading-order error due to the constituents' approximation. The utility of such robust approximation is demonstrated for robust canonical polyadic (CP) approximation of a (density-fitting) factorized 2-particle Coulomb interaction tensor. The resulting algebraic (grid-free) approximation for the Coulomb tensor, closely related to the factorization appearing in pseudospectral and tensor hypercontraction approaches, is efficient and accurate, with significantly reduced rank compared to the naive (non-robust) approximation. Application of the robust approximation to the particle-particle ladder term in the coupled-cluster singles and doubles reduces the size complexity from $\mathcal{O}(N^6)$ to $\mathcal{O}(N^5)$ with robustness ensuring negligible errors in chemically-relevant energy differences using CP ranks approximately equal to the size of the density-fitting basis.
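The error-cancellation idea from the abstract can be illustrated with a small numerical sketch. In the sketch below (illustrative notation, not necessarily the paper's), `B` is an exact factor of a Gram-like tensor `V = B B^T`, and `B_approx` is a low-rank surrogate for it; a truncated SVD stands in for the CP factorization purely for brevity. The naive reconstruction `B_approx B_approx^T` carries an error that is first order in `B - B_approx`, whereas the robust combination `B_approx B^T + B B_approx^T - B_approx B_approx^T` cancels that leading term, leaving a residual quadratic in the factor error. This is a sketch of the general robust-approximation principle stated in the abstract, not the paper's specific algorithm.

```python
import numpy as np

# Minimal sketch of robust approximation of a tensor network:
# approximate one constituent factor and cancel the leading-order error.
# Names (B, B_approx, V) and the SVD-based surrogate are assumptions made
# for illustration; the paper uses a CP approximation of a density-fitting
# factor of the Coulomb tensor.

rng = np.random.default_rng(0)
n, r = 60, 40

# "Exact" factor B and the target tensor V = B B^T.
B = rng.standard_normal((n, r))
V = B @ B.T

# Low-rank surrogate of B (truncated SVD stands in for a CP factorization).
rank = 20
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Naive approximation: error is first order in dB = B - B_approx.
V_naive = B_approx @ B_approx.T

# Robust approximation: cross terms cancel the first-order error, so
# V - V_robust = dB @ dB.T, i.e. quadratic in the factor error.
V_robust = B_approx @ B.T + B @ B_approx.T - B_approx @ B_approx.T

err_naive = np.linalg.norm(V - V_naive) / np.linalg.norm(V)
err_robust = np.linalg.norm(V - V_robust) / np.linalg.norm(V)
print(f"naive relative error:  {err_naive:.3e}")
print(f"robust relative error: {err_robust:.3e}")
```

The quadratic residual is what allows the robust scheme to tolerate much smaller ranks than the naive one, consistent with the abstract's claim that CP ranks on the order of the density-fitting basis size suffice for negligible errors in energy differences.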
