Scalable Quantum Convolutional Neural Networks (2209.12372v2)

Published 26 Sep 2022 in quant-ph and cs.AI

Abstract: With the beginning of the noisy intermediate-scale quantum (NISQ) era, quantum neural network (QNN) has recently emerged as a solution for the problems that classical neural networks cannot solve. Moreover, QCNN is attracting attention as the next generation of QNN because it can process high-dimensional vector input. However, due to the nature of quantum computing, it is difficult for the classical QCNN to extract a sufficient number of features. Motivated by this, we propose a new version of QCNN, named scalable quantum convolutional neural network (sQCNN). In addition, using the fidelity of QC, we propose an sQCNN training algorithm named reverse fidelity training (RF-Train) that maximizes the performance of sQCNN.

Citations (6)

Summary

  • The paper presents a scalable architecture for quantum CNNs that overcomes barren plateaus by employing multiple quantum circuits instead of increasing qubit counts per circuit.
  • It introduces Reverse Fidelity Training (RF-Train), which regularizes the fidelity between filter outputs to diversify the extracted features and improve classification accuracy.
  • Experimental results on MNIST and FMNIST demonstrate improved accuracy and effective feature representation, validating both the design's scalability and practical applicability.

An Overview of Scalable Quantum Convolutional Neural Networks

This paper introduces a novel approach to implementing convolutional neural networks (CNNs) in the quantum computing domain, with a particular focus on scalability, termed Scalable Quantum Convolutional Neural Networks (sQCNNs). The authors focus on overcoming a conventional challenge of Quantum Convolutional Neural Networks (QCNNs): the difficulty of extracting a sufficient number of features, which stems from the nature of quantum computing.

Key Propositions

  1. Scalable Architecture: The primary contribution is the design of a scalable architecture for QCNNs. Unlike traditional QCNNs, in which the number of extractable features is tied to the number of qubits in a single quantum circuit, sQCNNs achieve scalability by increasing the number of quantum circuits (filters) employed. Keeping each circuit small while adding circuits sidesteps the barren plateau problem, wherein gradients vanish as the number of qubits grows (a minimal sketch of this filter-scaling idea follows this list).
  2. Reverse Fidelity Training (RF-Train): The paper introduces an innovative training algorithm known as Reverse Fidelity Training (RF-Train), which leverages the fidelity between quantum states, a hallmark of quantum information theory, to diversify the features extracted by the multiple filters within sQCNN. By optimizing fidelity to enhance feature diversity, RF-Train significantly improves classification performance (a simplified sketch of the fidelity regularizer also follows this list).
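
To make the filter-scaling idea concrete, the following is a minimal PennyLane-style sketch: several small, fixed-width circuits ("filters") each encode the same input patch and contribute a few measured features, instead of a single large circuit with many qubits. The angle encoding, the `BasicEntanglerLayers` ansatz, and the qubit and filter counts are illustrative assumptions, not the paper's exact circuits.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4    # qubits per filter, kept small to avoid barren plateaus
n_filters = 3   # scalability knob: add filters instead of qubits
n_layers = 2    # depth of the (assumed) variational ansatz per filter

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_filter(patch, weights):
    # Angle-encode the input patch onto this filter's qubits.
    qml.AngleEmbedding(patch, wires=range(n_qubits))
    # Shallow trainable ansatz (a stand-in for the paper's filter circuit).
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # One expectation value per qubit is read out as a feature.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Independent parameters per filter; the feature count grows with n_filters
# while each individual circuit stays small.
shape = qml.BasicEntanglerLayers.shape(n_layers=n_layers, n_wires=n_qubits)
filter_weights = [np.random.uniform(0, np.pi, size=shape) for _ in range(n_filters)]

patch = np.random.uniform(0, np.pi, size=n_qubits)   # toy input patch
features = np.concatenate([np.array(quantum_filter(patch, w)) for w in filter_weights])
print(features.shape)   # (n_filters * n_qubits,) features from small circuits
```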
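
The reverse-fidelity idea can be sketched independently of any quantum framework: the fidelity between two pure states is |⟨ψ|φ⟩|², and a reverse-fidelity term penalizes high fidelity (similarity) between the states produced by different filters. The objective below is a hypothetical simplification for illustration; the paper's exact RF-Train loss and regularizer may differ.

```python
import numpy as np

def fidelity(psi, phi):
    """Fidelity |<psi|phi>|^2 between two normalized pure state vectors."""
    return np.abs(np.vdot(psi, phi)) ** 2

def rf_train_objective(task_loss, filter_states, rf_coeff=0.1):
    """Hypothetical RF-Train-style objective: the task loss plus a penalty on
    pairwise fidelity between filter output states, so minimizing the objective
    pushes the filters toward extracting diverse features."""
    penalty = 0.0
    n = len(filter_states)
    for i in range(n):
        for j in range(i + 1, n):
            penalty += fidelity(filter_states[i], filter_states[j])
    return task_loss + rf_coeff * penalty

# Toy usage: two nearly identical filter states incur a larger penalty
# than two (near-)orthogonal ones.
psi = np.array([1.0, 0.0])
phi_similar = np.array([0.99, 0.14]); phi_similar /= np.linalg.norm(phi_similar)
phi_orthogonal = np.array([0.0, 1.0])
print(rf_train_objective(0.5, [psi, phi_similar]))      # larger penalty
print(rf_train_objective(0.5, [psi, phi_orthogonal]))   # smaller penalty
```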

Experimental Validation

The paper provides a comprehensive experimental evaluation, applying sQCNN to the MNIST and FMNIST datasets. Key takeaways from the model's performance include:

  • Enhanced Accuracy: On both datasets, sQCNN with RF-Train achieved higher top-1 accuracy than classical QCNNs trained with vanilla training. In particular, sQCNN with a larger RF-regularizer parameter outperformed traditional models by a substantial margin.
  • Improved Feature Diversity: The experimental findings show that increasing the RF-regularizer parameter increases the Euclidean distance between extracted features, which translates into more effective classification (a simple distance metric of this kind is sketched after this list).
  • Scalability Demonstrated: Unlike traditional QCNNs, whose accuracy degrades as the number of qubits grows, sQCNN exhibited stable performance improvements as filters were added, validating its scalable design.
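
One straightforward way to quantify the feature-diversity claim is the mean pairwise Euclidean distance between the feature vectors produced by different filters. The snippet below is an illustrative metric, not necessarily the exact measurement used in the paper.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_distance(filter_features):
    """Average Euclidean distance between feature vectors from different
    filters; larger values indicate more diverse extracted features."""
    dists = [np.linalg.norm(a - b) for a, b in combinations(filter_features, 2)]
    return float(np.mean(dists))

# Toy usage with three 4-dimensional feature vectors (one per filter).
features = [np.random.rand(4) for _ in range(3)]
print(mean_pairwise_distance(features))
```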

Theoretical and Practical Implications

Theoretically, this research sits at the intersection of quantum computing and machine learning, offering a way to train deeper quantum architectures without significant performance degradation. Practically, the work can benefit sectors that harness quantum computational power for complex data modeling tasks, such as materials science, cryptography, and large-scale data analysis.

Future Directions

The work sets the stage for further exploration of sQCNNs across varied quantum-friendly tasks. Future research might adapt these architectures to different datasets or integrate more sophisticated quantum operations to further enhance the learning capability and efficiency of quantum models. There is also potential to explore hybrid models that combine classical and quantum layers to bridge existing computational gaps and improve processing efficiency.

Overall, this paper contributes significantly to quantum machine learning, moving QCNNs from theoretical constructs toward practical applicability while preserving scalability and performance. The presented scalable framework provides a critical stepping stone for subsequent innovations in quantum neural networks.
