
Towards Quantum Machine Learning with Tensor Networks

Published 30 Mar 2018 in quant-ph, cond-mat.str-el, and cs.LG | (1803.11537v2)

Abstract: Machine learning is a promising application of quantum computing, but challenges remain as near-term devices will have a limited number of physical qubits and high error rates. Motivated by the usefulness of tensor networks for machine learning in the classical context, we propose quantum computing approaches to both discriminative and generative learning, with circuits based on tree and matrix product state tensor networks that could have benefits for near-term devices. The result is a unified framework where classical and quantum computing can benefit from the same theoretical and algorithmic developments, and the same model can be trained classically then transferred to the quantum setting for additional optimization. Tensor network circuits can also provide qubit-efficient schemes where, depending on the architecture, the number of physical qubits required scales only logarithmically with, or independently of, the input or output data sizes. We demonstrate our proposals with numerical experiments, training a discriminative model to perform handwriting recognition using an optimization procedure that could be carried out on quantum hardware, and testing the noise resilience of the trained model.

Citations (310)

Summary

  • The paper introduces tensor network circuit designs whose physical-qubit requirements scale logarithmically with, or independently of, the data size, making efficient use of limited qubits.
  • It proposes a hybrid optimization framework that combines classical pre-training with quantum refinement to manage resource overhead.
  • Numerical experiments on MNIST digit classification show test accuracies above 95% for most digit pairs, along with strong resilience under simulated quantum noise.

Overview of Towards Quantum Machine Learning with Tensor Networks

The paper explores the potential of combining quantum computing with tensor networks to enhance machine learning capabilities. The authors propose quantum algorithms for both discriminative and generative learning tasks that leverage the structural properties of tensor networks, specifically tree tensor networks and matrix product state architectures. The framework is unified: the same model can be initialized using classical computation and then further optimized in a quantum environment. This transition promises computational benefits, especially for near-term quantum devices.
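
To make the discriminative tree-network model concrete, here is a minimal classical sketch of contracting a binary tree tensor network over encoded input features. The cos/sin local feature encoding and the random isometries are illustrative assumptions, not the paper's trained parameters; in the quantum setting each isometry would correspond to a unitary circuit block with some outputs discarded.

```python
import numpy as np

def feature_map(x):
    """Map a pixel value x in [0, 1] to a 2-dim unit vector
    (the cos/sin local encoding common in tensor-network ML; an assumption here)."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def random_isometry(d_in, d_out, rng):
    """Rows-orthonormal matrix W (W @ W.T = I): the classical analogue
    of a unitary block whose extra outputs are discarded."""
    q, _ = np.linalg.qr(rng.normal(size=(d_in, d_in)))
    return q[:d_out, :]

def tree_classify(x, layers):
    """Contract a binary tree tensor network bottom-up: pairs of feature
    vectors are merged by isometries until one vector of class scores remains."""
    vecs = [feature_map(xi) for xi in x]
    for layer in layers:
        vecs = [w @ np.kron(vecs[2 * i], vecs[2 * i + 1])
                for i, w in enumerate(layer)]
    return vecs[0]

rng = np.random.default_rng(0)
n, bond = 8, 2                      # 8 input features, bond dimension 2
layers, m = [], n
while m > 1:                        # build log2(n) layers of isometries
    layers.append([random_isometry(bond * bond, bond, rng)
                   for _ in range(m // 2)])
    m //= 2

x = rng.random(n)
scores = tree_classify(x, layers)   # one score per class (2 here)
```

The tree has depth log2(n), which is the structural origin of the logarithmic qubit scaling discussed below.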

Contributions and Methodology

The paper provides three primary contributions:

  1. Tensor Network Circuits on Quantum Devices: By leveraging tensor networks, the proposed quantum circuits can operate efficiently on a limited number of qubits, with requirements scaling logarithmically with, or independently of, the data size. This is particularly advantageous for near-term quantum devices that are constrained by the number of available qubits.
  2. Hybrid Optimization Framework: The framework allows for an initial training phase using classical resources, which is followed by further refinement on quantum hardware. Such a strategy aids in managing the quantum resource overhead and simplifies optimization by starting with well-initialized quantum models.
  3. Noise Resilience: The inherent structural advantages of the tensor network allow for a higher degree of noise resilience, which is crucial given the noise levels in current quantum hardware. This is illustrated through numerical experiments, showing promising results even with realistic quantum noise models.
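
The qubit-efficient claim in point 1 can be illustrated classically for the matrix product state case: contracting the network left to right needs only a working vector of fixed bond dimension, no matter how long the input is, mirroring the sequential circuit scheme that reuses a constant number of qubits. The feature encoding and random tensors below are assumptions for illustration, not the paper's trained model.

```python
import numpy as np

def feature_map(x):
    """cos/sin local encoding of a pixel value in [0, 1] (an assumption)."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def mps_classify(x, tensors, boundary, top):
    """Contract a matrix product state classifier left to right.
    Working memory is a single bond-dimension vector, independent of the
    input length N -- the classical mirror of the constant-qubit scheme."""
    v = boundary.copy()
    for xi, A in zip(x, tensors):                    # A: (bond, 2, bond)
        v = np.einsum('a,asj,s->j', v, A, feature_map(xi))
        v /= np.linalg.norm(v)                       # keep contraction well scaled
    return v @ top                                   # project onto class scores

rng = np.random.default_rng(1)
N, bond = 16, 4
tensors = [rng.normal(size=(bond, 2, bond)) / np.sqrt(bond) for _ in range(N)]
boundary = np.zeros(bond); boundary[0] = 1.0
top = rng.normal(size=(bond, 2))                     # bond -> 2 class scores
scores = mps_classify(rng.random(N), tensors, boundary, top)
```

Doubling N here doubles the number of sequential steps but leaves the memory footprint (and, in the circuit version, the qubit count) unchanged.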

Numerical Experiments and Results

The authors demonstrate their approach by applying it to handwritten digit recognition, training a quantum model to classify pairs of digit classes from the MNIST dataset. They employ a discriminative tree tensor network architecture, achieving test accuracies of over 95% for most digit pairs. These results suggest the model's viability for practical quantum machine learning tasks on near-term devices.
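
The abstract notes that the training procedure "could be carried out on quantum hardware", i.e. it needs only loss evaluations (circuit executions), not analytic gradients. A common gradient-free optimizer for such settings is SPSA, sketched below on a toy quadratic loss; the choice of SPSA and all hyperparameters here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def spsa_minimize(loss, theta0, iters=300, a=0.2, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation: each step uses
    only two loss evaluations (two batches of circuit runs on hardware),
    regardless of the number of parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                         # standard SPSA gain decay
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # central-difference estimate along a single random direction
        g = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g
    return theta

# toy stand-in for a two-class training loss, minimized at theta = 1
loss = lambda t: float(np.sum((t - 1.0) ** 2))
theta = spsa_minimize(loss, np.zeros(4))
```

In the hybrid workflow described above, such a loop would refine on quantum hardware a model whose parameters were first obtained by classical tensor-network training.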

Implications and Future Directions

The implications of this research extend across both theoretical and practical domains:

  • Practical Implications: For quantum applied machine learning, tensor network-based algorithms provide a pragmatic pathway to implement complex models on near-term quantum hardware with limited qubits and high error rates. This is particularly relevant for data-intensive applications where classical methods face resource bottlenecks.
  • Theoretical Implications: The blend of quantum computing with tensor networks underscores interesting theoretical challenges and opportunities. Attributes such as entanglement, locality, and error resilience open avenues for exploring more efficient quantum learning paradigms. Further theoretical work is necessary to derive analytical bounds and guarantees on noise resilience and generalization capabilities.
  • Future Developments: As improvements in quantum hardware continue, exploring larger tensor networks—like PEPS and MERA architectures—or integrating them with other quantum algorithms could enhance model expressivity and scalability. Developing specialized optimization techniques that consider quantum constraints directly during training can also lead to more efficient implementations.

The research presents a compelling case for quantum-tensor network synergy in machine learning contexts, providing a template for future investigations into quantum machine learning architectures that leverage sophisticated computational models such as tensor networks. As quantum technologies progress, the outlined techniques have the potential to contribute significantly to efficient and powerful machine learning systems.
