
Operator backflow and the classical simulation of quantum transport (2111.09904v1)

Published 18 Nov 2021 in cond-mat.str-el

Abstract: Tensor product states have proved extremely powerful for simulating the area-law entangled states of many-body systems, such as the ground states of gapped Hamiltonians in one dimension. The applicability of such methods to the \emph{dynamics} of many-body systems is less clear: the memory required grows exponentially in time in most cases, quickly becoming unmanageable. New methods reduce the memory required by selectively discarding/dissipating parts of the many-body wavefunction which are expected to have little effect on the hydrodynamic observables typically of interest: for example, some methods discard fine-grained correlations associated with $n$-point functions, with $n$ exceeding some cutoff $\ell_*$. In this work, we present a theory for the sizes of `backflow corrections', i.e., systematic errors due to discarding this fine-grained information. In particular, we focus on their effect on transport coefficients. Our results suggest that backflow corrections are exponentially suppressed in the size of the cutoff $\ell_*$. Moreover, the backflow errors themselves have a hydrodynamical expansion, which we elucidate. We test our predictions against numerical simulations run on random unitary circuits and ergodic spin-chains. These results lead to the conjecture that transport coefficients in ergodic diffusive systems can be captured to a given precision $\epsilon$ with an amount of memory scaling as $\exp[\mathcal{O}(\log(\epsilon)^2)]$, significantly better than the naive estimate of memory $\exp[\mathcal{O}(\mathrm{poly}(\epsilon^{-1}))]$ required by more brute-force methods.
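
One heuristic way to see how the conjectured memory scaling could arise from the exponential suppression result (a sketch under assumptions not spelled out in the abstract): if the backflow error decays as $\epsilon \sim e^{-c\,\ell_*}$ for some constant $c$, then reaching precision $\epsilon$ requires a cutoff $\ell_* \sim \log(\epsilon^{-1})/c$. Assuming further that the memory needed to retain all correlations up to range $\ell_*$ grows as $\exp[\mathcal{O}(\ell_*^2)]$, one obtains

$$\text{memory} \sim \exp\!\left[\mathcal{O}(\ell_*^2)\right] = \exp\!\left[\mathcal{O}(\log(\epsilon)^2)\right],$$

which is quasi-polynomial in $\epsilon^{-1}$ and hence much milder than the brute-force estimate $\exp[\mathcal{O}(\mathrm{poly}(\epsilon^{-1}))]$. The quadratic dependence on $\ell_*$ is assumed here for illustration, consistent with the conjecture rather than quoted from the paper.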
