
Quantum decision trees with information entropy (2502.11412v3)

Published 17 Feb 2025 in quant-ph

Abstract: We present a classification algorithm for quantum states, inspired by decision-tree methods. To adapt the decision-tree framework to the probabilistic nature of quantum measurements, we use conditional probabilities to compute information gain, thereby optimizing the measurement scheme. For each measurement shot on an unknown quantum state, the algorithm selects the observable with the highest expected information gain, continuing until convergence. We demonstrate through simulations that this algorithm effectively identifies quantum states sampled from the Haar random distribution. However, despite not relying on circuit-based quantum neural networks, the algorithm still encounters challenges akin to the barren plateau problem. To leading order, we show that the information gain is proportional to the variance of the observable's expectation values over the candidate states. As the system size increases, this variance, and consequently the information gain, are exponentially suppressed, posing significant challenges for classifying general Haar-random quantum states. Finally, we apply the quantum decision tree to classify the ground states of various Hamiltonians using physically motivated observables. On both simulators and quantum computers, the quantum decision tree yields better performance than methods that are not information-optimized. This indicates that measuring physically motivated observables can significantly improve classification performance, pointing toward a future direction for this approach.
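The core measurement-selection step described in the abstract — pick the observable whose single shot maximizes the expected reduction in entropy of the posterior over candidate states — can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the candidate states, observables, and outcome probabilities below are hypothetical toy values chosen so that one observable is informative and the other is not.

```python
import math

# Hypothetical toy setup: two candidate states, two observables, binary outcomes.
# P_PLUS[state][observable] = probability of measuring outcome +1.
# These numbers are illustrative only, not taken from the paper.
P_PLUS = {
    "psi_a": {"Z": 0.9, "X": 0.5},
    "psi_b": {"Z": 0.5, "X": 0.5},
}

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution given as a dict."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def expected_information_gain(prior, observable):
    """Expected entropy reduction of the posterior over candidate states
    after a single measurement shot of `observable`."""
    expected_posterior_entropy = 0.0
    for outcome in (+1, -1):
        # Likelihood of this outcome for each candidate state.
        like = {
            s: P_PLUS[s][observable] if outcome == +1 else 1 - P_PLUS[s][observable]
            for s in prior
        }
        # Marginal probability of the outcome under the prior.
        p_out = sum(prior[s] * like[s] for s in prior)
        if p_out == 0:
            continue
        # Bayesian posterior over candidate states given the outcome.
        post = {s: prior[s] * like[s] / p_out for s in prior}
        expected_posterior_entropy += p_out * entropy(post)
    return entropy(prior) - expected_posterior_entropy

prior = {"psi_a": 0.5, "psi_b": 0.5}
gains = {obs: expected_information_gain(prior, obs) for obs in ("Z", "X")}
best = max(gains, key=gains.get)
```

Here the X measurement has identical outcome statistics for both candidates, so its expected information gain is zero, while Z separates them and is selected. This mirrors the abstract's leading-order observation: the gain tracks the spread of the observable's expectation values across the candidate states, so an observable whose expectation values barely vary over the candidates (as happens generically for Haar-random states at large system size) yields vanishing information gain.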


Authors (2)
