A universal approximation theorem for nonlinear resistive networks (2312.15063v3)

Published 22 Dec 2023 in cs.LG and cond-mat.dis-nn

Abstract: Resistor networks have recently been studied as analog computing platforms for machine learning, particularly due to their compatibility with the Equilibrium Propagation training framework. In this work, we explore the computational capabilities of these networks. We prove that electrical networks consisting of voltage sources, linear resistors, diodes, and voltage-controlled voltage sources (VCVSs) can approximate any continuous function to arbitrary precision. Central to our proof is a method for translating a neural network with rectified linear units into an approximately equivalent electrical network comprising these four elements. Our proof relies on two assumptions: (a) that circuit elements are ideal, and (b) that variable resistor conductances and VCVS amplification factors can take any value (arbitrarily small or large). Our findings provide insights that could guide the development of universal self-learning electrical networks.
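To make the abstract's translation idea concrete, here is a minimal sketch (an illustration under idealized-element assumptions, not the paper's actual construction): a voltage-controlled voltage source supplies a weighted sum of node voltages (the affine part of a layer), and an ideal diode driving a grounded resistor clamps a node at max(0, ·), reproducing a rectified linear unit. The function names, gains, and the |x| target below are illustrative choices, not taken from the paper.

```python
import numpy as np

def vcvs_sum(voltages, gains, bias=0.0):
    # A VCVS (ideal, arbitrary gain) realising the weighted sum  sum_i gains[i]*v_i + bias.
    return float(np.dot(gains, voltages) + bias)

def ideal_diode_node(v_drive):
    # source --|>|-- node --[R]-- ground with an *ideal* diode:
    # when v_drive > 0 the diode conducts with zero drop, so the node sits at v_drive;
    # when v_drive <= 0 the diode blocks, no current flows through R, and the node is 0.
    # The node voltage is therefore max(0, v_drive), i.e. ReLU(v_drive).
    return max(0.0, v_drive)

def two_unit_circuit(x):
    # Two hidden "units" computing ReLU(x) and ReLU(-x); an output VCVS sums them,
    # giving |x| exactly for this toy target.
    h1 = ideal_diode_node(vcvs_sum([x], [1.0]))
    h2 = ideal_diode_node(vcvs_sum([x], [-1.0]))
    return vcvs_sum([h1, h2], [1.0, 1.0])

xs = np.linspace(-2.0, 2.0, 9)
print([round(two_unit_circuit(float(x)), 3) for x in xs])  # matches |x| on the grid
```

This only illustrates the ReLU-to-diode correspondence; the paper's proof additionally relies on variable resistor conductances and VCVS gains being allowed to take arbitrarily small or large values, as stated in the assumptions above.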
