
Evolved Quantum Boltzmann Machines (2501.03367v2)

Published 6 Jan 2025 in quant-ph and cond-mat.stat-mech

Abstract: We introduce evolved quantum Boltzmann machines as a variational ansatz for quantum optimization and learning tasks. Given two parameterized Hamiltonians $G(\theta)$ and $H(\phi)$, an evolved quantum Boltzmann machine consists of preparing a thermal state of the first Hamiltonian $G(\theta)$ followed by unitary evolution according to the second Hamiltonian $H(\phi)$. Alternatively, one can think of it as first realizing imaginary time evolution according to $G(\theta)$ followed by real time evolution according to $H(\phi)$. After defining this ansatz, we provide analytical expressions for the gradient vector and illustrate their application in ground-state energy estimation and generative modeling, showing how the gradient for these tasks can be estimated by means of quantum algorithms that involve classical sampling, Hamiltonian simulation, and the Hadamard test. We also establish analytical expressions for the Fisher-Bures, Wigner-Yanase, and Kubo-Mori information matrix elements of evolved quantum Boltzmann machines, as well as quantum algorithms for estimating each of them, which leads to at least three different general natural gradient descent algorithms based on this ansatz. Along the way, we establish a broad generalization of the main result of [Luo, Proc. Am. Math. Soc. 132, 885 (2004)], proving that the Fisher-Bures and Wigner-Yanase information matrices of general parameterized families of states differ by no more than a factor of two in the matrix (Loewner) order, making them essentially interchangeable for training when using natural gradient descent.
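The ansatz described in the abstract can be illustrated with a small classical simulation: prepare the Gibbs state of $G(\theta)$, then conjugate it by the unitary generated by $H(\phi)$. The sketch below uses NumPy/SciPy on two qubits; the specific Hamiltonians `G = theta * Z⊗Z` and `H = phi * (X⊗I + I⊗X)` are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices for building small example Hamiltonians.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron(*ops):
    """Tensor product of a sequence of operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def evolved_qbm_state(theta, phi):
    """Density matrix of an evolved QBM on two qubits.

    Hypothetical parameterization for illustration only:
    G(theta) = theta * Z⊗Z, H(phi) = phi * (X⊗I + I⊗X).
    """
    G = theta * kron(Z, Z)
    H = phi * (kron(X, I2) + kron(I2, X))
    # Thermal (Gibbs) state of G: e^{-G} / Tr[e^{-G}]
    # (imaginary-time evolution according to G).
    rho_G = expm(-G)
    rho_G /= np.trace(rho_G)
    # Unitary (real-time) evolution according to H: U rho U^dagger.
    U = expm(-1j * H)
    return U @ rho_G @ U.conj().T

rho = evolved_qbm_state(0.7, 0.3)
```

On a quantum device this construction would instead rely on thermal-state preparation and Hamiltonian simulation subroutines, with gradients estimated via sampling and the Hadamard test as the abstract outlines; the classical matrix computation above is only feasible for a few qubits.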
