Universal Performance Gap of Neural Quantum States Applied to the Hofstadter-Bose-Hubbard Model (2405.01981v3)

Published 3 May 2024 in quant-ph and cond-mat.dis-nn

Abstract: Neural Quantum States (NQS) have demonstrated significant potential in approximating ground states of many-body quantum systems, though their performance can be inconsistent across different models. This study investigates the performance of NQS in approximating the ground state of the Hofstadter-Bose-Hubbard (HBH) model, an interacting boson system on a two-dimensional square lattice with a perpendicular magnetic field. Our results indicate that increasing magnetic flux leads to a substantial increase in energy error, by up to three orders of magnitude. Importantly, this decline in NQS performance is consistent across different optimization methods, neural network architectures, and physical model parameters, suggesting a significant challenge intrinsic to the model. Despite investigating potential causes such as the wave function's phase structure, quantum entanglement, the fractional quantum Hall effect, and the variational loss landscape, the precise reasons for this degradation remain elusive. The HBH model thus proves to be an effective testing ground for exploring the capabilities and limitations of NQS. Our study highlights the need for advanced theoretical frameworks to better understand the expressive power of NQS, which would allow systematic development of methods that could potentially overcome these challenges.
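For reference, the Hofstadter-Bose-Hubbard Hamiltonian is conventionally written as

H = -J \sum_{\langle i,j\rangle} \left( e^{i\phi_{ij}} \, b_i^\dagger b_j + \mathrm{h.c.} \right) + \frac{U}{2} \sum_i n_i (n_i - 1),

where J is the nearest-neighbor hopping amplitude, U the on-site interaction, b_i^\dagger (b_i) creates (annihilates) a boson on site i, n_i = b_i^\dagger b_i, and the Peierls phases \phi_{ij} are chosen so that each square plaquette encloses a magnetic flux of 2\pi\alpha. The paper's exact gauge choice, boundary conditions, and parameter ranges are not restated here.

As a minimal, self-contained illustration of the model (not the paper's neural-network approach), the sketch below builds this Hamiltonian in a fixed-particle-number Fock basis and diagonalizes it exactly for a tiny lattice. The lattice size, boson number, parameter values, Landau gauge, and open boundary conditions are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: exact diagonalization of a tiny Hofstadter-Bose-Hubbard
# system. All sizes and parameters below are illustrative assumptions.
import itertools
import numpy as np

Lx, Ly = 2, 2                  # lattice dimensions (assumed; far smaller than typical studies)
N = 2                          # total number of bosons (assumed)
J, U, alpha = 1.0, 1.0, 0.25   # hopping, on-site interaction, flux per plaquette (assumed)

sites = [(x, y) for x in range(Lx) for y in range(Ly)]
idx = {s: i for i, s in enumerate(sites)}
n_sites = len(sites)

# Fock basis: all occupation vectors with total particle number N
basis = [np.array(occ) for occ in itertools.product(range(N + 1), repeat=n_sites)
         if sum(occ) == N]
lookup = {tuple(b): k for k, b in enumerate(basis)}
dim = len(basis)

H = np.zeros((dim, dim), dtype=complex)
for k, occ in enumerate(basis):
    # on-site interaction: (U/2) * n_i * (n_i - 1)
    H[k, k] += 0.5 * U * np.sum(occ * (occ - 1))
    # hopping with Peierls phases in the Landau gauge: phase exp(2*pi*i*alpha*x)
    # attached to hops along y; open boundary conditions assumed
    for (x, y) in sites:
        for (dx, dy) in [(1, 0), (0, 1)]:
            xt, yt = x + dx, y + dy
            if xt >= Lx or yt >= Ly:
                continue
            i, j = idx[(x, y)], idx[(xt, yt)]
            phase = np.exp(2j * np.pi * alpha * x) if dy == 1 else 1.0
            # term -J * phase * b_i^dagger b_j (moves one boson from site j to site i)
            if occ[j] > 0:
                new = occ.copy()
                new[j] -= 1
                new[i] += 1
                amp = -J * phase * np.sqrt(occ[j] * (occ[i] + 1))
                H[lookup[tuple(new)], k] += amp
                # Hermitian-conjugate term for the same bond
                H[k, lookup[tuple(new)]] += np.conj(amp)

E0 = np.linalg.eigvalsh(H)[0]
print(f"Ground-state energy (Hilbert-space dimension {dim}): {E0:.6f}")
```

Exact ground-state energies obtained this way for small systems are the kind of reference against which variational NQS energies, and hence relative energy errors such as those discussed in the abstract, are typically benchmarked.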

Citations (1)
