
Trainability maximization using estimation of distribution algorithms assisted by surrogate modelling for quantum architecture search (2407.20091v1)

Published 29 Jul 2024 in quant-ph

Abstract: Quantum architecture search (QAS) involves optimizing both the configuration of a quantum parametric circuit and its parameters for a variational quantum algorithm. The problem is thus known to be multi-level, as the performance of a given architecture is unknown until its parameters are tuned using classical routines. The task is further complicated by well-known trainability issues, e.g., barren plateaus (BPs). In this paper, we aim to achieve two improvements in QAS: (1) to reduce the number of measurements via an online surrogate model of the evaluation process that aggressively discards poorly performing architectures; and (2) to avoid training circuits when BPs are present. To detect BPs, we employ a recently developed metric, information content, which requires measuring the energy values of only a small set of parameters to estimate the magnitude of the cost function's gradient; the main idea of this proposal is to leverage this metric, which detects the onset of vanishing gradients, to steer the overall search away from such unfavorable regions. We experimentally validate our proposal for the variational quantum eigensolver and show that our algorithm is able not only to find solutions previously proposed in the literature for the considered Hamiltonians, but also to outperform the state of the art when initialized from the set of architectures proposed in the literature. The results suggest that the proposed methodology could be used in settings where one wishes to improve the trainability of known architectures while maintaining good performance.
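
To make the information content idea concrete, the sketch below is a minimal, illustrative Python formulation of one common information-content construction: it symbolizes cost differences along a random walk in parameter space and measures the entropy of symbol transitions, so that values near zero flag a flat landscape (a barren-plateau signature). The function name, thresholds, step size, and normalization here are assumptions for illustration and may differ from the exact definition used in the paper.

```python
import numpy as np

def information_content_probe(cost_fn, n_params, n_steps=100,
                              step_size=0.1, eps=1e-3, seed=None):
    """Hypothetical random-walk probe of cost-landscape flatness.

    Walks randomly in parameter space, records the cost at each step,
    discretizes consecutive cost differences into symbols {-1, 0, +1}
    using the sensitivity threshold `eps`, and returns the entropy of
    symbol transitions (base 6, one per ordered pair a != b). Values
    near 0 indicate a flat landscape, a signature of a barren plateau.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_params)
    costs = [cost_fn(theta)]
    for _ in range(n_steps):
        theta = theta + rng.normal(scale=step_size, size=n_params)
        costs.append(cost_fn(theta))

    diffs = np.diff(costs)
    symbols = np.where(diffs > eps, 1, np.where(diffs < -eps, -1, 0))

    # Count transitions between consecutive symbols a -> b with a != b.
    transitions = {}
    for a, b in zip(symbols[:-1], symbols[1:]):
        if a != b:
            transitions[(a, b)] = transitions.get((a, b), 0) + 1

    total = len(symbols) - 1
    if total <= 0 or not transitions:
        return 0.0  # perfectly flat walk: no information in the landscape

    # Shannon entropy over the 6 possible a != b transitions.
    h = 0.0
    for count in transitions.values():
        p = count / total
        h -= p * np.log(p) / np.log(6.0)
    return h

# Illustrative usage with hypothetical stand-ins for a parametrized-circuit
# energy evaluation: a nearly flat landscape scores ~0, a structured one > 0.
flat = lambda theta: 1e-6 * np.sin(theta).sum()
bumpy = lambda theta: np.cos(theta).sum()
print(information_content_probe(flat, n_params=8, seed=0))   # ~0.0
print(information_content_probe(bumpy, n_params=8, seed=0))  # noticeably > 0
```

In a search loop like the one the abstract describes, a probe of this kind could be evaluated on each candidate architecture before any parameter training, and candidates scoring below a chosen cutoff skipped; this matches the role the abstract assigns to the metric, namely avoiding the training of circuits for which BPs are detected.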
