Transformer Wave Function for two dimensional frustrated magnets: emergence of a Spin-Liquid Phase in the Shastry-Sutherland Model (2311.16889v4)

Published 28 Nov 2023 in cond-mat.str-el and cond-mat.dis-nn

Abstract: Understanding quantum magnetism in two-dimensional systems represents a lively branch in modern condensed-matter physics. In the presence of competing super-exchange couplings, magnetic order is frustrated and can be suppressed down to zero temperature. Still, capturing the correct nature of the exact ground state is a highly complicated task, since energy gaps in the spectrum may be very small and states with different physical properties may have competing energies. Here, we introduce a variational Ansatz for two-dimensional frustrated magnets by leveraging the power of representation learning. The key idea is to use a particular deep neural network with real-valued parameters, a so-called Transformer, to map physical spin configurations into a high-dimensional feature space. Within this abstract space, the determination of the ground-state properties is simplified and requires only a shallow output layer with complex-valued parameters. We illustrate the efficacy of this variational Ansatz by studying the ground-state phase diagram of the Shastry-Sutherland model, which captures the low-temperature behavior of SrCu$_2$(BO$_3$)$_2$ with its intriguing properties. With highly accurate numerical simulations, we provide strong evidence for the stabilization of a spin-liquid between the plaquette and antiferromagnetic phases. In addition, a direct calculation of the triplet excitation at the $\Gamma$ point provides compelling evidence for a gapless spin liquid. Our findings underscore the potential of Neural-Network Quantum States as a valuable tool for probing uncharted phases of matter, and open up new possibilities for establishing the properties of many-body systems.
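The Ansatz described in the abstract (a real-valued Transformer encoder that maps spin configurations into a high-dimensional feature space, followed by a shallow output layer with complex-valued parameters) can be sketched in code. The snippet below is a minimal illustration in PyTorch, not the authors' implementation: the patch size, hidden dimensions, pooling step, and the use of two real-valued heads to form the complex log-amplitude are all simplifying assumptions made for brevity.

```python
import torch
import torch.nn as nn

class TransformerWaveFunction(nn.Module):
    """Sketch of a ViT-style neural quantum state: a real-valued Transformer
    encoder maps spin configurations to features, and a shallow output layer
    returns the complex log-amplitude log psi(sigma). Sizes below are
    illustrative, not the settings used in the paper."""

    def __init__(self, lattice_size=6, patch_size=2, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        assert lattice_size % patch_size == 0
        self.patch_size = patch_size
        n_patches = (lattice_size // patch_size) ** 2
        patch_dim = patch_size * patch_size

        # Real-valued embedding of spin patches plus learned positional encodings.
        self.embed = nn.Linear(patch_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))

        # Real-valued Transformer encoder: the "representation learning" stage.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

        # Shallow output layer acting on the pooled features. Here two real heads
        # stand in for the complex-valued output layer of the paper (assumption):
        # their outputs are combined into Re[log psi] and Im[log psi].
        self.out_re = nn.Linear(d_model, 1)
        self.out_im = nn.Linear(d_model, 1)

    def forward(self, spins):
        # spins: (batch, L, L) tensor with entries +/-1
        b, L, _ = spins.shape
        p = self.patch_size
        # Split the lattice into non-overlapping p x p patches (ViT-style tokens).
        patches = spins.unfold(1, p, p).unfold(2, p, p).reshape(b, -1, p * p).float()
        h = self.encoder(self.embed(patches) + self.pos)  # (batch, n_patches, d_model)
        h = h.mean(dim=1)                                 # pool over patches
        return torch.complex(self.out_re(h), self.out_im(h)).squeeze(-1)  # log psi

# Minimal usage: log-amplitudes for a batch of random configurations on a 6x6 lattice.
model = TransformerWaveFunction()
sigma = torch.randint(0, 2, (8, 6, 6)) * 2 - 1
log_psi = model(sigma)
print(log_psi.shape)  # torch.Size([8])
```

In a full variational Monte Carlo calculation, these log-amplitudes would enter the evaluation of local energies and the stochastic optimization of the parameters; sampling, symmetrization, and optimization are omitted from this sketch.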
