A Quantum Model for Multilayer Perceptron (1808.10561v2)

Published 31 Aug 2018 in quant-ph

Abstract: The multilayer perceptron is the most commonly used class of feed-forward artificial neural networks. It has many applications in diverse fields such as speech recognition, image recognition, and machine translation software. To keep pace with the fast development of quantum machine learning, in this paper we propose a new model for studying the multilayer perceptron on a quantum computer. This involves preparing the quantum state of the output signal in each layer and establishing a quantum version of the learning algorithm for the weights in each layer. We will show that the corresponding quantum versions achieve at least a quadratic speedup, and in some cases an exponential speedup, over the classical algorithms. This provides an efficient method for studying the multilayer perceptron and its machine learning applications on a quantum computer. Finally, as an inspiration, an exponentially fast learning algorithm (based on Hebb's learning rule) for the Hopfield network is proposed.
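For context on the classical baseline the abstract's final claim targets: Hebb's learning rule for a Hopfield network stores patterns by summing their outer products into the weight matrix, and recall iterates a sign-thresholded update. Below is a minimal NumPy sketch of that classical procedure (the function names and synchronous-update choice are ours for illustration, not taken from the paper, whose contribution is a quantum version of this learning step):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebb's rule for a Hopfield network: W = (1/N) * sum_k x_k x_k^T,
    with the diagonal zeroed so neurons have no self-connections."""
    patterns = np.asarray(patterns, dtype=float)  # shape (num_patterns, N)
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=10):
    """Synchronous recall: repeatedly apply s <- sign(W s)."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1.0, -1.0)
    return s
```

For example, storing the pattern `[1, -1, 1, -1]` and recalling from a version with one flipped bit recovers the stored pattern. Classically, forming `W` costs time proportional to the number of patterns times N^2; the abstract's claim is that a quantum formulation of this learning step can be exponentially faster.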


Authors (1)