QOC: Quantum On-Chip Training with Parameter Shift and Gradient Pruning (2202.13239v2)

Published 26 Feb 2022 in quant-ph, cs.AR, cs.CV, and cs.LG

Abstract: Parameterized Quantum Circuits (PQCs) are drawing increasing research interest thanks to their potential to achieve quantum advantages on near-term Noisy Intermediate Scale Quantum (NISQ) hardware. In order to achieve scalable PQC learning, the training process needs to be offloaded to real quantum machines instead of using exponential-cost classical simulators. One common approach to obtaining PQC gradients is parameter shift, whose cost scales linearly with the number of qubits. We present QOC, the first experimental demonstration of practical on-chip PQC training with parameter shift. However, we find that due to the significant quantum errors (noises) on real machines, gradients obtained from naive parameter shift have low fidelity and thus degrade the training accuracy. To this end, we further propose probabilistic gradient pruning to first identify gradients with potentially large errors and then remove them. Specifically, small gradients have larger relative errors than large ones and thus have a higher probability of being pruned. We perform extensive experiments with Quantum Neural Network (QNN) benchmarks on 5 classification tasks using 5 real quantum machines. The results demonstrate that our on-chip training achieves over 90% and 60% accuracy for 2-class and 4-class image classification tasks, respectively. The probabilistic gradient pruning brings up to 7% PQC accuracy improvement over no pruning. Overall, we successfully obtain on-chip training accuracy similar to noise-free simulation with much better training scalability. The QOC code is available in the TorchQuantum library.

Authors (6)
  1. Hanrui Wang (49 papers)
  2. Zirui Li (43 papers)
  3. Jiaqi Gu (70 papers)
  4. Yongshan Ding (31 papers)
  5. David Z. Pan (70 papers)
  6. Song Han (155 papers)
Citations (43)

Summary

  • The paper demonstrates on-chip PQC training with the parameter shift rule, which computes exact quantum gradients without requiring ancillary qubits.
  • It applies a probabilistic gradient pruning method to filter out noisy gradients, yielding up to a 7% accuracy improvement.
  • Experiments on IBM quantum machines show high-performance QNN benchmarks, with over 90% accuracy in 2-class image classification tasks.

An Evaluation of On-Chip Training of Parameterized Quantum Circuits Using Parameter Shift and Gradient Pruning

The paper "QOC: Quantum On-Chip Training with Parameter Shift and Gradient Pruning" presents a practical method for on-chip training of Parameterized Quantum Circuits (PQCs). The work bridges the classical and quantum computing paradigms by harnessing real quantum devices for the training computations themselves, with an emphasis on performing PQC training efficiently on Noisy Intermediate Scale Quantum (NISQ) systems.

Methodology Overview

The authors begin with the parameter shift rule for quantum gradient computation, which requires neither restructuring of quantum gates nor ancillary qubits. The rule is distinguished by the fact that it yields exact gradients, eschewing the pitfalls of finite-difference approximation.
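To make the rule concrete, the following is a minimal NumPy sketch for a single-qubit RY rotation measured in the Z basis. The π/2 shift applies to gates generated by Pauli operators; the function names are illustrative and not taken from the paper's TorchQuantum implementation.

```python
import numpy as np

# Single-qubit RY(theta) acting on |0>, measured in the Z basis.
# For gates generated by a Pauli operator, the parameter shift rule gives
# the exact gradient: d<Z>/dtheta = ( <Z>(theta + pi/2) - <Z>(theta - pi/2) ) / 2.

def expectation_z(theta: float) -> float:
    """<Z> after RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.cos(theta)  # |cos(theta/2)|^2 - |sin(theta/2)|^2

def parameter_shift_grad(theta: float, shift: float = np.pi / 2) -> float:
    """Exact gradient from two extra circuit evaluations, no ancilla qubits."""
    return 0.5 * (expectation_z(theta + shift) - expectation_z(theta - shift))

theta = 0.3
analytic = -np.sin(theta)                 # d/dtheta cos(theta)
shifted = parameter_shift_grad(theta)
assert np.isclose(analytic, shifted)      # exact, not a finite-difference estimate
```

Unlike a finite-difference estimate, the result is exact for any shift-rule-compatible gate, which is why the gradients are trustworthy up to hardware noise rather than discretization error.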

A significant innovation in this work is the application of a probabilistic gradient pruning method. This technique is developed as a response to the noise challenges inherent in NISQ devices. It intelligently prunes low-fidelity gradients, which are likely to be corrupted by noise, allowing for a cleaner gradient signal to guide the optimization process.
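The paper's exact pruning schedule and window parameters are not reproduced here; the sketch below illustrates only the core idea, under the assumption that a gradient's probability of being pruned rises as its relative magnitude falls. The normalization and `keep_floor` value are illustrative choices.

```python
import numpy as np

def probabilistic_gradient_prune(grads: np.ndarray,
                                 rng: np.random.Generator,
                                 keep_floor: float = 0.1) -> np.ndarray:
    """Zero out gradients with probability inversely related to magnitude.

    Small gradients carry larger relative noise on NISQ hardware, so they
    are pruned with higher probability; large gradients are usually kept.
    """
    mags = np.abs(grads)
    # Keep probability grows with relative magnitude; the floor makes the
    # pruning probabilistic rather than a hard magnitude threshold.
    keep_prob = np.clip(mags / (mags.max() + 1e-12), keep_floor, 1.0)
    mask = rng.random(grads.shape) < keep_prob
    return grads * mask

rng = np.random.default_rng(0)
noisy_grads = np.array([0.8, -0.02, 0.5, 0.001, -0.3])
print(probabilistic_gradient_prune(noisy_grads, rng))
```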

Experimental Results

The experimental setup validates the proposed training framework across quantum neural network (QNN) benchmarks involving tasks such as image and vowel recognition, using multiple IBM quantum machines. Notably, the experiments demonstrate that the training approach achieves high accuracy (over 90% on 2-class image classification tasks) on real quantum devices, closely approaching the results of noise-free simulation. Furthermore, gradient pruning provides up to a 7% improvement in final PQC training accuracy on real hardware while retaining computational efficiency.
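Putting the pieces together, here is a hedged sketch of one training step at a high level: the quantum device evaluates two shifted circuits per parameter, the classical host prunes noisy gradients, and plain gradient descent applies the update. `run_circuit` is a hypothetical stand-in for a hardware expectation measurement, and the optimizer choice is illustrative.

```python
import numpy as np

def run_circuit(params: np.ndarray) -> float:
    """Hypothetical stand-in for an expectation value measured on hardware."""
    return float(np.cos(params).prod())

def prune(grads: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Magnitude-based probabilistic pruning, as sketched above."""
    keep_prob = np.clip(np.abs(grads) / (np.abs(grads).max() + 1e-12), 0.1, 1.0)
    return grads * (rng.random(grads.shape) < keep_prob)

def train_step(params: np.ndarray, lr: float = 0.1,
               rng: np.random.Generator = np.random.default_rng(0)) -> np.ndarray:
    """One on-chip step: two circuit runs per parameter, prune, descend."""
    grads = np.empty_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += np.pi / 2
        minus[i] -= np.pi / 2
        grads[i] = 0.5 * (run_circuit(plus) - run_circuit(minus))
    return params - lr * prune(grads, rng)

params = np.array([0.3, 1.2, -0.7])
for _ in range(5):
    params = train_step(params)
```

Because each step needs only two circuit evaluations per trainable parameter, the measurement cost stays linear rather than exponential, which is what makes offloading training to real hardware tractable.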

Discussion of Implications

The practical on-chip training approach demonstrated in this paper presents notable theoretical and practical implications. From a theoretical standpoint, the success of the parameter shift and gradient pruning method suggests new avenues for exploring not just PQCs but other quantum algorithms potentially hampered by hardware noise and scalability issues.

Practically, the research offers a pathway for enhanced machine learning tasks using quantum computers, one where parameters are optimized directly on real quantum hardware rather than classical simulators. This practice could be a critical step toward scalable quantum models, offering potential quantum computational speedups in machine learning and other fields.

Future Directions

Future research might focus on refining the gradient pruning techniques to accommodate diverse quantum architectures and error conditions. Examining hybrid algorithms that merge classical and quantum computational advantages, as well as exploring more complex quantum algorithms built on this foundation, could also be promising. The broader use of the QOC framework, available in the TorchQuantum library, provides fertile ground for application-driven experiments that can further corroborate the scalability and efficacy of the approach.

Overall, while this work does not claim to fully solve the challenges of quantum machine learning on NISQ devices, it provides a crucial step forward by empirically validating a framework that works within these constraints with promising results.
