Quantum algorithms for feedforward neural networks (1812.03089v2)

Published 7 Dec 2018 in quant-ph and cs.LG

Abstract: Quantum machine learning has the potential for broad industrial applications, and the development of quantum algorithms for improving the performance of neural networks is of particular interest given the central role they play in machine learning today. In this paper we present quantum algorithms for training and evaluating feedforward neural networks based on the canonical classical feedforward and backpropagation algorithms. Our algorithms rely on an efficient quantum subroutine for approximating the inner products between vectors in a robust way, and on implicitly storing large intermediate values in quantum random access memory for fast retrieval at later stages. The running times of our algorithms can be quadratically faster in the size of the network than their standard classical counterparts, since they depend linearly on the number of neurons in the network, as opposed to the number of connections between neurons as in the classical case. This makes our algorithms suited for large-scale, highly connected networks where the number of edges in the network dominates the classical algorithmic running time. Furthermore, networks trained by our quantum algorithm may have an intrinsic resilience to overfitting, as the algorithm naturally mimics the effects of classical regularization techniques such as dropout. Our algorithms can also be used as the basis for new quantum-inspired classical algorithms which have the same dependence on the network dimensions as their quantum counterparts, but with quadratic overhead in other parameters that makes them relatively impractical.
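
The abstract's complexity claim hinges on where the classical cost lives: every exact forward and backward pass touches each weight once, so the running time scales with the number of connections (edges), not the number of neurons. The minimal sketch below (layer sizes, sigmoid activations, and a squared-error update are illustrative assumptions, not the paper's construction) makes that explicit; the paper's quantum algorithms would replace each exact inner product with a robust quantum inner-product estimate, bringing the per-layer dependence down to the number of neurons.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): classical feedforward
# and backpropagation, whose cost is dominated by matrix-vector products,
# i.e. by the number of *connections*. A fully connected network with
# layer sizes [784, 256, 10] has sum(sizes) = 1050 neurons but
# 784*256 + 256*10 = 203264 edges, and every edge is touched per pass.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

sizes = [784, 256, 10]
weights = [rng.normal(scale=0.1, size=(m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]

def feedforward(x):
    """Forward pass: each W @ a costs O(edges in that layer)."""
    activations = [x]
    for W in weights:
        # Each entry of W @ a is an inner product <W[j], a>; computing
        # all of them exactly touches every edge of the layer once.
        # The paper's quantum subroutine approximates these inner
        # products instead of computing them exactly.
        activations.append(sigmoid(W @ activations[-1]))
    return activations

def backprop(x, y, lr=0.1):
    """Backward pass: again O(edges) per training example."""
    acts = feedforward(x)
    delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])  # output-layer error
    for i in reversed(range(len(weights))):
        grad = np.outer(delta, acts[i])                 # O(edges) gradient
        if i > 0:
            delta = (weights[i].T @ delta) * acts[i] * (1 - acts[i])
        weights[i] -= lr * grad

x = rng.random(sizes[0])
y = np.eye(sizes[-1])[3]   # one-hot target
backprop(x, y)
```

The inexactness of the inner-product estimates is also where the claimed dropout-like regularization comes from: each estimated activation carries noise, loosely resembling randomly perturbed or dropped units during training.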

Citations (55)
