Neuroevolving Electronic Dynamical Networks
Abstract: Neuroevolution is a powerful technique that applies an evolutionary algorithm to refine the performance of artificial neural networks through natural selection; however, the fitness evaluation of these networks can be time-consuming and computationally expensive, particularly for continuous time recurrent neural networks (CTRNNs), which require the simulation of differential equations. To overcome this challenge, field programmable gate arrays (FPGAs) have emerged as an increasingly popular solution due to their high performance and low power consumption. Furthermore, their ability to undergo dynamic and partial reconfiguration enables extremely rapid evaluation of CTRNN fitness, effectively addressing the bottleneck associated with conventional approaches to evolvable hardware. By implementing fitness evaluation directly in the programmable logic of the FPGA, hyper-parallel evaluation becomes feasible, dramatically reducing the time required for assessment. This inherent parallelism accelerates the entire neuroevolutionary process by several orders of magnitude, enabling faster convergence to an optimal solution. The work presented in this study demonstrates the potential of dynamic and partial reconfiguration on capable FPGAs as a powerful platform for neuroevolving dynamic neural networks.
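To illustrate why software-based fitness evaluation is the bottleneck the abstract describes, the sketch below simulates a CTRNN with the standard forward-Euler integration of Beer's equation, tau_i dy_i/dt = -y_i + sum_j w_ji sigma(y_j + theta_j) + I_i. This is a minimal illustrative sketch, not the paper's implementation: the function names (`ctrnn_step`, `evaluate_fitness`) and the toy fitness measure (mean output of neuron 0) are hypothetical, chosen only to show that each candidate network costs many sequential integration steps on a CPU.

```python
import numpy as np

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One forward-Euler step of the CTRNN dynamics:
    tau_i dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i."""
    sigma = 1.0 / (1.0 + np.exp(-(y + theta)))  # logistic activation
    dydt = (-y + W @ sigma + I) / tau
    return y + dt * dydt

def evaluate_fitness(W, tau, theta, I, steps=1000, dt=0.01):
    """Toy fitness (hypothetical): mean output of neuron 0 over the
    trajectory. The inherently sequential loop of 'steps' Euler updates
    per candidate is what FPGA-based hyper-parallel evaluation removes."""
    y = np.zeros(len(tau))
    total = 0.0
    for _ in range(steps):
        y = ctrnn_step(y, W, tau, theta, I, dt)
        total += y[0]
    return total / steps
```

In a neuroevolutionary run this loop executes once per individual per generation, so a population of hundreds evaluated over thousands of generations multiplies into millions of integration steps, which is the cost that on-chip parallel evaluation amortizes.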