Autoregressive neural quantum states of Fermi Hubbard models (2411.07144v2)

Published 11 Nov 2024 in cond-mat.str-el and cond-mat.dis-nn

Abstract: Neural quantum states (NQS) have emerged as a powerful ansatz for variational quantum Monte Carlo studies of strongly-correlated systems. Here, we apply recurrent neural networks (RNNs) and autoregressive transformer neural networks to the Fermi-Hubbard and the (non-Hermitian) Hatano-Nelson-Hubbard models in one and two dimensions. In both cases, we observe that the convergence of the RNN ansatz is challenged when the interaction strength is increased. We present a physically motivated and easy-to-implement strategy for improving the optimization, namely, ramping the model parameters. Furthermore, we investigate the advantages and disadvantages of the autoregressive sampling property of both network architectures. For the Hatano-Nelson-Hubbard model, we identify convergence issues that stem from the autoregressive sampling scheme in combination with the non-Hermitian nature of the model. Our findings provide insights into the challenges of the NQS approach and take a first step towards exploring strongly-correlated electrons using this ansatz.
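
The autoregressive sampling property mentioned in the abstract means that a configuration is built site by site, drawing each occupation from a conditional distribution produced by the network, so samples come directly from |ψ|² without a Markov chain. The following is a minimal toy sketch of this idea for a Hubbard-like local Fock basis; the RNN cell, weight shapes, and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes standing in for a trained RNN wavefunction;
# a real NQS would learn these weights variationally.
HIDDEN = 8
LOCAL_DIM = 4  # per-site Fock states: empty, spin-up, spin-down, doubly occupied
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_in = rng.normal(scale=0.1, size=(LOCAL_DIM, HIDDEN))
W_out = rng.normal(scale=0.1, size=(HIDDEN, LOCAL_DIM))

def rnn_step(hidden, prev_onehot):
    # One recurrence step: fold the previous site's occupation
    # into the hidden state.
    return np.tanh(hidden @ W_h + prev_onehot @ W_in)

def conditional_probs(hidden):
    # Softmax over the local Fock basis gives p(n_i | n_1 ... n_{i-1}).
    logits = hidden @ W_out
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def sample_configuration(n_sites):
    # Autoregressive sampling: each site's occupation is drawn from
    # its conditional, so the configuration is an exact, uncorrelated
    # sample from the modeled distribution -- no Markov chain needed.
    hidden = np.zeros(HIDDEN)
    prev = np.zeros(LOCAL_DIM)
    config, log_prob = [], 0.0
    for _ in range(n_sites):
        hidden = rnn_step(hidden, prev)
        probs = conditional_probs(hidden)
        n_i = rng.choice(LOCAL_DIM, p=probs)
        log_prob += np.log(probs[n_i])
        config.append(int(n_i))
        prev = np.eye(LOCAL_DIM)[n_i]
    return config, log_prob
```

Because the per-site probabilities are normalized by construction, the sampled log-probability is exact, which is what makes autoregressive networks attractive for variational Monte Carlo; the paper's point is that this same scheme can cause convergence issues for non-Hermitian models.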

Citations (1)