Boolean learning under noise-perturbations in hardware neural networks (2003.12319v2)

Published 27 Mar 2020 in cs.NE and cs.LG

Abstract: High-efficiency hardware integration of neural networks benefits from realizing nonlinearity, network connectivity and learning fully in a physical substrate. Multiple systems have recently implemented some or all of these operations, yet the focus was placed on addressing technological challenges. Fundamental questions regarding learning in hardware neural networks remain largely unexplored. Noise in particular is unavoidable in such architectures, and here we investigate its interaction with a learning algorithm using an opto-electronic recurrent neural network. We find that noise strongly modifies the system's path during convergence and, surprisingly, fully decorrelates the final readout weight matrices. This highlights the importance of understanding architecture, noise and the learning algorithm as interacting players, and thereby identifies the need for mathematical tools for optimizing noisy, analogue systems.
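The effect described in the abstract can be illustrated with a toy simulation: Boolean readout weights optimized by greedy single-weight flips, where each error evaluation is perturbed by additive noise. All names, sizes, the greedy flip rule and the Gaussian noise model below are illustrative assumptions for a sketch, not the paper's actual experimental setup or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy substrate: fixed random "reservoir" states for a binary task.
# N readout weights, T samples (illustrative sizes, not from the paper).
N, T = 100, 400
X = rng.standard_normal((T, N))          # network states fed to the readout
y = np.sign(X @ rng.standard_normal(N))  # target Boolean labels

def error(w, noise_std):
    # Noisy readout evaluation: hardware noise is modelled (as an
    # assumption) by an additive Gaussian perturbation of the output.
    out = X @ w + noise_std * rng.standard_normal(T)
    return np.mean(np.sign(out) != y)

def greedy_boolean_learning(noise_std, steps=2000):
    # Readout weights restricted to {-1, +1}; flip one randomly chosen
    # weight per step and keep the flip only if the (noisy) error does
    # not increase, otherwise revert it.
    w = rng.choice([-1.0, 1.0], size=N)
    best = error(w, noise_std)
    for _ in range(steps):
        i = rng.integers(N)
        w[i] *= -1
        e = error(w, noise_std)
        if e <= best:
            best = e
        else:
            w[i] *= -1  # revert rejected flip
    return w

# Two independent learning runs on the same task: with noisy evaluations
# the converged Boolean weight vectors can differ substantially.
w_a = greedy_boolean_learning(noise_std=0.5)
w_b = greedy_boolean_learning(noise_std=0.5)
overlap = np.mean(w_a == w_b)  # 0.5 would mean fully decorrelated weights
print(f"weight overlap between runs: {overlap:.2f}")
```

Comparing `overlap` across different `noise_std` values gives a rough feel for how noise decorrelates the final weight configurations, the qualitative observation the abstract reports.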

Authors (6)
  1. Louis Andreoli (7 papers)
  2. Xavier Porte (22 papers)
  3. Stéphane Chrétien (30 papers)
  4. Maxime Jacquot (11 papers)
  5. Laurent Larger (21 papers)
  6. Daniel Brunner (48 papers)
Citations (11)
