
Trainability of Dissipative Perceptron-Based Quantum Neural Networks (2005.12458v2)

Published 26 May 2020 in quant-ph and cs.LG

Abstract: Several architectures have been proposed for quantum neural networks (QNNs), with the goal of efficiently performing machine learning tasks on quantum data. Rigorous scaling results are urgently needed for specific QNN constructions to understand which, if any, will be trainable at a large scale. Here, we analyze the gradient scaling (and hence the trainability) for a recently proposed architecture that we call dissipative QNNs (DQNNs), where the input qubits of each layer are discarded at the layer's output. We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits. Moreover, we provide quantitative bounds on the scaling of the gradient for DQNNs under different conditions, such as different cost functions and circuit depths, and show that trainability is not always guaranteed.
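
To make the "barren plateau" notion in the abstract concrete, here is a minimal numerical sketch (NumPy statevector simulation) that estimates the variance of a cost-function gradient over random parameter initializations and shows it shrinking as the qubit count grows. It does not implement the paper's DQNN construction: the hardware-efficient ansatz (RY rotations plus a CZ chain), the global cost C = 1 - |<0...0|U|0...0>|^2, the layer count, and the sampling settings are all illustrative choices made for this example.

```python
import numpy as np

def apply_ry(state, theta, qubit, n):
    """Apply an RY(theta) rotation to one qubit of an n-qubit statevector."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    psi = state.reshape([2] * n)
    a = np.take(psi, 0, axis=qubit)  # amplitudes with this qubit in |0>
    b = np.take(psi, 1, axis=qubit)  # amplitudes with this qubit in |1>
    return np.stack([c * a - s * b, s * a + c * b], axis=qubit).reshape(-1)

def apply_cz_chain(state, n):
    """Apply CZ gates between neighbouring qubits on a line."""
    psi = state.reshape([2] * n).copy()
    for q in range(n - 1):
        idx = [slice(None)] * n
        idx[q], idx[q + 1] = 1, 1
        psi[tuple(idx)] *= -1.0
    return psi.reshape(-1)

def cost(params, n, layers):
    """Global cost C = 1 - |<0...0| U(params) |0...0>|^2 (illustrative choice)."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_ry(state, params[k], q, n)
            k += 1
        state = apply_cz_chain(state, n)
    return 1.0 - abs(state[0]) ** 2

def grad_first_param(params, n, layers):
    """Parameter-shift rule for dC/dtheta_1 (exact for RY gates)."""
    plus, minus = params.copy(), params.copy()
    plus[0] += np.pi / 2
    minus[0] -= np.pi / 2
    return 0.5 * (cost(plus, n, layers) - cost(minus, n, layers))

rng = np.random.default_rng(0)
layers, samples = 20, 200
for n in range(2, 9, 2):
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, n * layers), n, layers)
             for _ in range(samples)]
    print(f"n = {n} qubits: Var[dC/dtheta_1] ~ {np.var(grads):.2e}")
```

Running this, the printed gradient variance drops rapidly (roughly exponentially) with the number of qubits, which is the qualitative signature of a barren plateau; the paper's contribution is rigorous, architecture-specific bounds of this kind for DQNNs under different cost functions and circuit depths.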

Authors (4)
  1. Kunal Sharma (35 papers)
  2. M. Cerezo (76 papers)
  3. Lukasz Cincio (87 papers)
  4. Patrick J. Coles (96 papers)
Citations (141)
