
Classification with Quantum Neural Networks on Near Term Processors (1802.06002v2)

Published 16 Feb 2018 in quant-ph

Abstract: We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter dependent unitary transformations which acts on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network's predictor of the binary label of the input state. First we look at classifying classical data sets which consist of n-bit strings with binary labels. The input quantum state is an n-bit computational basis state corresponding to a sample string. We show how to design a circuit made from two qubit unitaries that can correctly represent the label of any Boolean function of n bits. For certain label functions the circuit is exponentially long. We introduce parameter dependent unitaries that can be adapted by supervised learning of labeled data. We study an example of real world data consisting of downsampled images of handwritten digits each of which has been labeled as one of two distinct digits. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near term gate model quantum computer where its power can be explored beyond what can be explored with simulation.

Citations (753)

Summary

  • The paper introduces a QNN framework that uses parameterized unitary transformations for supervised learning in binary classification tasks.
  • The paper demonstrates the network's ability to represent Boolean functions and achieve low error rates in simulations, including downsampled MNIST digit recognition.
  • The paper highlights scalability challenges and future prospects of quantum batch learning for processing both classical labels and coherent quantum states.

Classification with Quantum Neural Networks on Near-Term Processors

The paper by Farhi and Neven addresses the potential of quantum neural networks (QNNs) to classify labeled data on near-term quantum processors. The authors propose a QNN framework capable of representing both classical and quantum data, built from parameter-dependent unitary transformations whose parameters are adjusted by supervised learning to improve prediction accuracy.
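To make this setup concrete, below is a minimal NumPy/SciPy sketch of such a predictor, in the same classical-simulation spirit as the paper: a toy ansatz in which each parameterized unitary is generated by a product of Pauli operators, applied to a computational basis state with one extra readout qubit, followed by a Pauli-Y readout. The helper names (`pauli_on`, `predict`) and the particular choice of generators are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_on(pauli, qubit, n_qubits):
    """Tensor product with `pauli` on `qubit` and identity on every other qubit."""
    op = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        op = np.kron(op, pauli if q == qubit else I2)
    return op

def predict(theta, generators, z_bits):
    """Apply exp(i * theta_k * Sigma_k) layers to |z, 1> and return <Y> on the readout qubit."""
    n_qubits = len(z_bits) + 1                       # data qubits plus one readout qubit
    bits = list(z_bits) + [1]                        # readout qubit starts in |1>
    index = int("".join(str(b) for b in bits), 2)
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[index] = 1.0                               # computational basis state |z, 1>
    for th, sigma in zip(theta, generators):         # sequence of parameter-dependent unitaries
        state = expm(1j * th * sigma) @ state
    y_readout = pauli_on(Y, n_qubits - 1, n_qubits)  # Pauli Y on the designated readout qubit
    return float(np.real(state.conj() @ y_readout @ state))

if __name__ == "__main__":
    n_data = 3
    # Toy ansatz: one generator Z_j (x) Y_readout per data qubit.
    generators = [pauli_on(Z, j, n_data + 1) @ pauli_on(Y, n_data, n_data + 1)
                  for j in range(n_data)]
    theta = np.random.uniform(0, 2 * np.pi, size=len(generators))
    print(predict(theta, generators, [1, 0, 1]))     # value in [-1, 1]; its sign is the predicted label
```

On real hardware the readout expectation would be estimated from repeated measurements rather than computed exactly from the state vector as it is here.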

Key Contributions

The QNN introduced in this paper is designed around a quantum circuit composed of a sequence of parameterized unitary transformations acting on input quantum states. For binary classification tasks, the network measures a Pauli operator on a designated readout qubit, which serves as the predictor of the binary label for the input state. The research explores various facets of QNNs, including:

  1. Representation Capability: The authors show that their QNN can represent the label of any Boolean function of n-bit inputs using circuits built from two-qubit unitaries. While some label functions can be represented compactly, they acknowledge that others require circuits of exponential length.
  2. Supervised Learning: The paper details how supervised learning can be performed on QNNs using stochastic gradient descent to minimize a sample loss. The learning process estimates the gradient of the sample loss with respect to the parameters and updates them iteratively to decrease the loss (a minimal sketch of this loop appears after this list).
  3. Empirical Tests: The paper includes classical simulations of small quantum systems on classification tasks, such as distinguishing pairs of handwritten digits from the MNIST dataset, to demonstrate the feasibility of QNNs. The images are downsampled to fit within computational limits, and the simulations achieve low classification error.
  4. Quantum Superpositions: Beyond classical data, they test the efficacy of QNNs on superpositions of computational basis states, demonstrating potential for quantum batch learning by presenting classically labeled data as coherent quantum superpositions.
  5. Future Outlook: The research speculates on the use of QNNs to classify general quantum states, a task with no direct classical counterpart. This includes learning a binary label associated with the expectation value of a Hamiltonian on quantum states.
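Continuing the sketch above (and reusing its hypothetical `predict` function and `generators`), the training loop mentioned in item 2 can be imitated with a simple finite-difference gradient estimate. The loss form 1 - label * &lt;Y&gt; is one natural choice consistent with reading the label off the sign of the readout expectation; it is a stand-in, not necessarily the exact loss or gradient procedure used in the paper.

```python
import numpy as np

def sample_loss(theta, generators, z_bits, label):
    # label is +1 or -1; loss is small when the readout expectation agrees with the label.
    return 1.0 - label * predict(theta, generators, z_bits)

def sgd_step(theta, generators, z_bits, label, lr=0.1, eps=1e-3):
    """One stochastic-gradient update on a single labeled sample.
    Central finite differences stand in for the measurement-based
    gradient estimates a quantum processor would provide."""
    grad = np.zeros_like(theta)
    for k in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[k] = eps
        loss_plus = sample_loss(theta + shift, generators, z_bits, label)
        loss_minus = sample_loss(theta - shift, generators, z_bits, label)
        grad[k] = (loss_plus - loss_minus) / (2 * eps)
    return theta - lr * grad

# Illustrative usage: sweep over a (hypothetical) list of (bit-string, label) samples.
# for z_bits, label in training_data:
#     theta = sgd_step(theta, generators, z_bits, label)
```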

Numerical Results and Implications

The exploration of small-scale QNNs through classical simulation finds that they can indeed perform classification tasks on data of practical interest, such as downsampled MNIST digits, with low error. The research serves as a preparatory step for future implementations on gate-model quantum computers, providing insight into how such quantum neural networks can be configured and trained effectively.

Challenges and Directions

While the computational results are promising, the paper highlights significant challenges in scaling. Training studies are limited to small systems because the dimension of the Hilbert space, and hence the cost of classical simulation, grows exponentially with the number of qubits. Furthermore, the difficulty of learning subset parity illustrates possible hurdles in efficiently finding good parameter settings for high-dimensional inputs.
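A rough back-of-the-envelope calculation shows why classical simulation caps the system size: storing the full state vector of n qubits requires 2^n complex amplitudes. The snippet below (illustrative only, assuming 16-byte complex amplitudes) makes the growth explicit.

```python
# Memory needed to hold a full n-qubit state vector with complex128 (16-byte) amplitudes.
for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits: {amplitudes:>18,d} amplitudes, {gib:,.3f} GiB")
```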

The paper outlines promising areas for future exploration, including refining QNN architectures and combining them with classical networks in hybrid schemes, thereby extending the boundaries of possible quantum advantage. The potential capability of QNNs to handle quantum-only data underscores a utility distinct from classical machine learning models.

In conclusion, Farhi and Neven's work is a substantial stride in exploratory quantum machine learning, demonstrating tools and strategies for putting near-term quantum processors to use. As technological and theoretical advances continue, frameworks such as the one proposed here will be crucial to understanding the role of quantum computing in complex data classification problems.
