
Large-Scale Optical Reservoir Computing for Spatiotemporal Chaotic Systems Prediction (2001.09131v3)

Published 24 Jan 2020 in physics.optics and cs.ET

Abstract: Reservoir computing is a relatively recent computational paradigm that originates from recurrent neural networks and is known for its wide range of implementations using different physical technologies. Large reservoirs are very hard to obtain on conventional computers, as both the computational complexity and the memory usage grow quadratically with the number of nodes. We propose an optical scheme performing reservoir computing over very large networks, potentially able to host several million fully connected photonic nodes thanks to its intrinsic parallelism and scalability. Our experimental studies confirm that, in contrast to conventional computers, the computation time of our optical scheme depends only linearly on the number of photonic nodes, a dependence caused entirely by electronic overheads, while the optical part of the computation remains fully parallel and independent of the reservoir size. To demonstrate the scalability of our optical scheme, we perform for the first time predictions on large spatiotemporal chaotic datasets obtained from the Kuramoto-Sivashinsky equation, using optical reservoirs with up to 50 000 optical nodes. Our results are extremely challenging to reproduce on conventional von Neumann machines, and they significantly advance the state of the art of unconventional reservoir computing approaches in general.
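To make the reservoir computing paradigm described in the abstract concrete, here is a minimal echo-state-network sketch in NumPy. It is an illustrative software analogue, not the authors' optical implementation: the reservoir is a random fully connected matrix (the paper's optical scheme realizes this connectivity physically, at up to 50 000 nodes), and a logistic-map time series stands in for the Kuramoto-Sivashinsky data. All sizes and hyperparameters below are assumptions chosen for a quick demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D chaotic signal (logistic map), a stand-in for the
# paper's Kuramoto-Sivashinsky spatiotemporal data.
T = 1000
u = np.empty(T)
u[0] = 0.4
for t in range(T - 1):
    u[t + 1] = 3.9 * u[t] * (1.0 - u[t])

# Random fully connected reservoir of N nodes. On a conventional
# computer this matrix costs O(N^2) memory and compute per step,
# which is exactly the scaling the optical scheme avoids.
N = 300
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# Drive the reservoir with the input signal and collect its states.
x = np.zeros(N)
states = np.zeros((T - 1, N))
for t in range(T - 1):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Linear readout trained by ridge regression to predict the next value;
# only this readout is trained, the reservoir itself stays fixed.
washout = 100          # discard the initial transient
X = states[washout:]
y = u[washout + 1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
print(f"one-step prediction NRMSE: {nrmse:.4f}")
```

Only the linear readout `W_out` is trained; the fixed random reservoir supplies the nonlinear memory. This separation is what makes physical substrates such as the paper's optical setup attractive: the expensive matrix-vector product is delegated to hardware.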

Authors (5)
  1. Mushegh Rafayelyan (8 papers)
  2. Jonathan Dong (32 papers)
  3. Yongqi Tan (4 papers)
  4. Florent Krzakala (179 papers)
  5. Sylvain Gigan (113 papers)
Citations (138)
