
Reservoir-size dependent learning in analogue neural networks (1908.08021v1)

Published 23 Jul 2019 in cs.NE, cs.ET, cs.LG, and stat.ML

Abstract: The implementation of artificial neural networks in hardware substrates is a major interdisciplinary enterprise. Well-suited candidates for physical implementations must combine nonlinear neurons with dedicated and efficient hardware solutions for both connectivity and training. Reservoir computing addresses the problems related to network connectivity and training in an elegant and efficient way. However, important questions regarding the impact of reservoir size and learning routines on the convergence speed during learning remain unaddressed. Here, we study in detail the learning process of a recently demonstrated photonic neural network based on a reservoir. We use a greedy algorithm to train our neural network for the task of chaotic signal prediction and analyze the learning-error landscape. Our results unveil fundamental properties of the system's optimization hyperspace. In particular, we determine the convergence speed of learning as a function of reservoir size and find exceptional, close to linear scaling. This linear dependence, together with our parallel diffractive coupling, represents optimal scaling conditions for our photonic neural network scheme.
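To make the training procedure described in the abstract concrete, the sketch below shows a greedy optimization of Boolean readout weights on a software-simulated echo-state reservoir driven toward one-step-ahead prediction. This is only an illustrative analogue of the approach: the paper's reservoir is a photonic hardware system with parallel diffractive coupling, and the reservoir dynamics, signal, function names, and parameters here (run_reservoir, greedy_train, the toy input signal, node count, leak rate) are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(u, n_nodes=100, spectral_radius=0.9, leak=0.3):
    """Drive a random recurrent (echo-state style) reservoir with input u
    and return the node states at every time step."""
    w_in = rng.uniform(-1, 1, n_nodes)
    w = rng.uniform(-1, 1, (n_nodes, n_nodes))
    w *= spectral_radius / max(abs(np.linalg.eigvals(w)))  # set spectral radius
    x = np.zeros(n_nodes)
    states = np.empty((len(u), n_nodes))
    for t, u_t in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(w @ x + w_in * u_t)
        states[t] = x
    return states

def greedy_train(states, target, n_epochs=5):
    """Greedily flip one Boolean readout weight at a time, keeping a flip
    only if it lowers the normalised mean-square prediction error (NMSE)."""
    n_nodes = states.shape[1]
    weights = rng.integers(0, 2, n_nodes).astype(float)

    def nmse(w):
        y = states @ w / max(w.sum(), 1.0)
        return np.mean((y - target) ** 2) / np.var(target)

    best = nmse(weights)
    errors = [best]
    for _ in range(n_epochs):
        for i in rng.permutation(n_nodes):       # visit weights in random order
            weights[i] = 1.0 - weights[i]        # trial flip
            err = nmse(weights)
            if err < best:
                best = err                       # keep the improving flip
            else:
                weights[i] = 1.0 - weights[i]    # revert the flip
            errors.append(best)
    return weights, errors

# Toy one-step-ahead prediction target (illustrative stand-in for a chaotic signal).
u = np.sin(np.linspace(0, 60, 1500)) * np.cos(np.linspace(0, 17, 1500))
states = run_reservoir(u[:-1])
weights, errors = greedy_train(states, u[1:])
print(f"final NMSE: {errors[-1]:.3f} after {len(errors) - 1} greedy updates")
```

In a sketch like this, the recorded `errors` trace plays the role of the learning-error landscape walk: plotting it against the number of greedy updates for different `n_nodes` values is the kind of experiment from which a convergence-speed-versus-reservoir-size scaling could be read off.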

Authors (5)
  1. Xavier Porte (22 papers)
  2. Louis Andreoli (7 papers)
  3. Maxime Jacquot (11 papers)
  4. Laurent Larger (21 papers)
  5. Daniel Brunner (48 papers)
Citations (1)
