Silicon Photonic Architecture for Training Deep Neural Networks with Direct Feedback Alignment (2111.06862v2)

Published 12 Nov 2021 in cs.LG, physics.app-ph, and physics.optics

Abstract: There has been growing interest in using photonic processors for performing neural network inference operations; however, these networks are currently trained using standard digital electronics. Here, we propose on-chip training of neural networks enabled by a CMOS-compatible silicon photonic architecture to harness the potential for massively parallel, efficient, and fast data operations. Our scheme employs the direct feedback alignment training algorithm, which trains neural networks using error feedback rather than error backpropagation, and can operate at speeds of trillions of multiply-accumulate (MAC) operations per second while consuming less than one picojoule per MAC operation. The photonic architecture exploits parallelized matrix-vector multiplications using arrays of microring resonators for processing multi-channel analog signals along single waveguide buses to calculate the gradient vector for each neural network layer in situ. We also experimentally demonstrate training deep neural networks with the MNIST dataset using on-chip MAC operation results. Our novel approach for efficient, ultra-fast neural network training showcases photonics as a promising platform for executing AI applications.
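
For readers unfamiliar with direct feedback alignment (DFA), the sketch below illustrates the update rule the abstract refers to: each hidden layer receives the output error projected through a fixed random feedback matrix, rather than an error backpropagated through the transposed forward weights. This is a minimal NumPy sketch of plain DFA on a small MLP; the layer sizes, activation, and learning rate are illustrative assumptions and do not reflect the paper's photonic implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: flattened 28x28 MNIST input, two hidden
# layers, 10 output classes (not the paper's exact network).
n_in, n_h1, n_h2, n_out = 784, 256, 128, 10

# Trained forward weights.
W1 = rng.normal(0.0, 0.05, (n_h1, n_in))
W2 = rng.normal(0.0, 0.05, (n_h2, n_h1))
W3 = rng.normal(0.0, 0.05, (n_out, n_h2))

# Fixed random feedback matrices: DFA projects the output error through
# these instead of backpropagating it through transposed weights.
B1 = rng.normal(0.0, 0.05, (n_h1, n_out))
B2 = rng.normal(0.0, 0.05, (n_h2, n_out))

def dtanh(a):
    return 1.0 - np.tanh(a) ** 2

def dfa_step(x, y, lr=1e-2):
    """One DFA update for a single (input, one-hot target) pair."""
    global W1, W2, W3

    # Forward pass.
    a1 = W1 @ x
    h1 = np.tanh(a1)
    a2 = W2 @ h1
    h2 = np.tanh(a2)
    y_hat = W3 @ h2              # linear readout

    e = y_hat - y                # output error (squared-error gradient)

    # DFA feedback: the matrix-vector products B_i @ e are the kind of
    # operation the photonic architecture evaluates in the analog domain.
    d1 = (B1 @ e) * dtanh(a1)
    d2 = (B2 @ e) * dtanh(a2)

    # Weight updates are outer products of local error and activations.
    W1 -= lr * np.outer(d1, x)
    W2 -= lr * np.outer(d2, h1)
    W3 -= lr * np.outer(e, h2)
```

Because the feedback matrices are fixed and random, each layer's error signal can be computed from the output error alone, in parallel, which is what makes the algorithm amenable to the in-situ gradient computation described in the abstract.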

Authors (9)
  1. Matthew J. Filipovich (6 papers)
  2. Zhimu Guo (3 papers)
  3. Mohammed Al-Qadasi (3 papers)
  4. Bicky A. Marquez (7 papers)
  5. Hugh D. Morison (1 paper)
  6. Volker J. Sorger (90 papers)
  7. Paul R. Prucnal (30 papers)
  8. Sudip Shekhar (19 papers)
  9. Bhavin J. Shastri (42 papers)
Citations (49)