
Neuroevolving Electronic Dynamical Networks

Published 6 Apr 2024 in cs.NE, cs.AI, and cs.AR | arXiv:2404.04587v2

Abstract: Neuroevolution is a powerful method that applies an evolutionary algorithm to refine the performance of artificial neural networks through selection; however, the fitness evaluation of these networks can be time-consuming and computationally expensive, particularly for continuous-time recurrent neural networks (CTRNNs), which require the numerical simulation of differential equations. Field-programmable gate arrays (FPGAs) have emerged as an increasingly popular way to overcome this challenge, owing to their high performance and low power consumption. Moreover, their capacity for dynamic and partial reconfiguration enables extremely rapid fitness evaluation of CTRNNs, addressing the bottleneck of conventional evolvable-hardware methods. By placing fitness evaluation directly in the FPGA's programmable logic, hyper-parallel evaluation becomes feasible, dramatically reducing the time required for assessment. This inherent parallelism of FPGAs accelerates the entire neuroevolutionary process by several orders of magnitude, enabling faster convergence to an optimal solution. The work presented in this study demonstrates the potential of dynamic and partial reconfiguration on capable FPGAs as a powerful platform for neuroevolving dynamic neural networks.
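To make the bottleneck concrete, the sketch below shows the kind of CTRNN simulation a fitness evaluation must perform in software: forward-Euler integration of the standard CTRNN equation τᵢ dyᵢ/dt = −yᵢ + Σⱼ wⱼᵢ σ(yⱼ + θⱼ) + Iᵢ, repeated for many timesteps per candidate network. This is a minimal illustration, not the paper's implementation; the function names, network size, and the settling-to-a-target fitness criterion are all assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic activation used in CTRNN formulations."""
    return 1.0 / (1.0 + np.exp(-x))

def step_ctrnn(y, weights, tau, bias, inputs, dt=0.01):
    """One forward-Euler step of tau_i dy_i/dt = -y_i + sum_j w_ji*sigma(y_j + theta_j) + I_i."""
    dydt = (-y + weights.T @ sigmoid(y + bias) + inputs) / tau
    return y + dt * dydt

def evaluate_fitness(weights, tau, bias, target=0.5, steps=1000):
    """Hypothetical fitness: how closely neuron 0's output settles to `target`.

    Each candidate genome costs `steps` integration steps, which is why
    evaluating a whole population serially is so expensive in software.
    """
    n = len(tau)
    y = np.zeros(n)                       # start from a quiescent state
    for _ in range(steps):
        y = step_ctrnn(y, weights, tau, bias, np.zeros(n))
    return -abs(sigmoid(y[0] + bias[0]) - target)
```

On an FPGA, many copies of this integration loop can run in parallel fixed-point logic, which is the source of the orders-of-magnitude speedup the abstract describes.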

Authors (1)