Adaptive pruning-based optimization of parameterized quantum circuits (2010.00629v1)

Published 1 Oct 2020 in quant-ph

Abstract: Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices. While past studies have developed powerful and expressive ansätze, their near-term applications have been limited by the difficulty of optimizing in the vast parameter space. In this work, we propose a heuristic optimization strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT). Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms, in which each iteration of the algorithm activates and optimizes a subset of the total parameter set. To update the parameter subset between iterations, we adapt the dynamic sparse reparameterization scheme of Mostafa et al. (arXiv:1902.05967). We demonstrate PECT for the Variational Quantum Eigensolver, in which we benchmark unitary coupled-cluster ansätze, including UCCSD and k-UpCCGSD, as well as the low-depth circuit ansatz (LDCA), to estimate ground-state energies of molecular systems. We additionally use a layerwise variant of PECT to optimize a hardware-efficient circuit for the Sycamore processor to estimate the ground-state energy densities of the one-dimensional Fermi-Hubbard model. From our numerical data, we find that PECT can enable optimizations of certain ansätze that were previously difficult to converge and, more generally, can improve the performance of variational algorithms by reducing the optimization runtime and/or the depth of circuits that encode the solution candidate(s).
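The abstract describes the PECT loop only at a high level, so the sketch below illustrates the activate-optimize-update cycle under stated assumptions. It is not the authors' implementation: the cost function is a classical stand-in for a device-estimated VQE energy, the subset-update rule is a simplified magnitude-based prune-and-regrow in the spirit of the dynamic sparse reparameterization scheme the paper adapts, and all names (pect_optimize, cost) and hyperparameters here are hypothetical.

```python
# Minimal sketch of a PECT-style optimization loop (illustrative only):
# optimize a small active subset of ansatz parameters, then update the
# subset by pruning the smallest-magnitude active parameters and
# activating previously inactive ones.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(theta):
    # Stand-in for a VQE energy <psi(theta)|H|psi(theta)> that would be
    # estimated on a quantum device or simulator.
    return float(np.sum(np.sin(theta - 0.5) ** 2))

def pect_optimize(n_params=24, n_active=6, n_rounds=10, swap_fraction=0.3):
    theta = np.zeros(n_params)          # inactive parameters are frozen at 0
    active = rng.choice(n_params, size=n_active, replace=False)
    for _ in range(n_rounds):
        # Optimize only the active subset; inactive parameters stay fixed.
        def restricted(x):
            full = theta.copy()
            full[active] = x
            return cost(full)
        res = minimize(restricted, theta[active], method="COBYLA")
        theta[active] = res.x
        # Prune: deactivate the smallest-magnitude active parameters ...
        n_swap = max(1, int(swap_fraction * n_active))
        order = np.argsort(np.abs(theta[active]))
        pruned, keep = active[order[:n_swap]], active[order[n_swap:]]
        theta[pruned] = 0.0             # pruned parameters reset to zero
        # ... and regrow: activate randomly chosen inactive parameters.
        inactive = np.setdiff1d(np.arange(n_params), active)
        grown = rng.choice(inactive, size=n_swap, replace=False)
        active = np.concatenate([keep, grown])
    return theta, cost(theta)

theta_opt, energy = pect_optimize()
print(f"final cost: {energy:.4f}")
```

The magnitude-based prune criterion and random regrowth mirror the dynamic sparse reparameterization idea cited in the abstract; in an actual PECT run, the cost evaluation would be a circuit execution and the active subset would correspond to the gates kept in the trained ansatz.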

Citations (56)

