Neuromodulated Learning in Deep Neural Networks (1812.03365v1)

Published 5 Dec 2018 in cs.NE, cs.LG, and stat.ML

Abstract: In the brain, learning signals change over time and synaptic location, and are applied based on the learning history at the synapse, in the complex process of neuromodulation. Learning in artificial neural networks, on the other hand, is shaped by hyper-parameters set before learning starts, which remain static throughout learning, and which are uniform for the entire network. In this work, we propose a method of deep artificial neuromodulation which applies the concepts of biological neuromodulation to stochastic gradient descent. Evolved neuromodulatory dynamics modify learning parameters at each layer in a deep neural network over the course of the network's training. We show that the same neuromodulatory dynamics can be applied to different models and can scale to new problems not encountered during evolution. Finally, we examine the evolved neuromodulation, showing that evolution found dynamic, location-specific learning strategies.
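
To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of what "learning parameters that vary by layer and over training" looks like when grafted onto plain SGD. The `neuromod_lr` function below is a hypothetical hand-written stand-in for the evolved neuromodulatory dynamics; in the paper those dynamics are found by evolution rather than designed. The toy two-layer network and its schedule are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task and a tiny two-layer network.
X = rng.normal(size=(256, 8))
y = (X @ rng.normal(size=(8, 1))).ravel()
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def neuromod_lr(layer_idx, step, total_steps, base_lr=0.05):
    """Hypothetical per-layer, time-varying learning rate.

    Stands in for the evolved neuromodulatory dynamics described in the
    paper; the real dynamics are discovered by evolution, not hand-coded.
    """
    progress = step / total_steps
    # Example schedule: earlier layers anneal more aggressively than later ones.
    return base_lr * (1.0 - progress) ** (layer_idx + 1)

total_steps = 200
for step in range(total_steps):
    # Forward pass: ReLU hidden layer, linear output, MSE loss.
    h = np.maximum(X @ W1, 0.0)
    err = (h @ W2).ravel() - y

    # Backward pass.
    grad_W2 = h.T @ err[:, None] / len(X)
    grad_h = err[:, None] @ W2.T
    grad_h[h <= 0.0] = 0.0
    grad_W1 = X.T @ grad_h / len(X)

    # Layer-specific, step-dependent learning rates replace the single
    # static, network-wide hyper-parameter of standard SGD.
    W1 -= neuromod_lr(0, step, total_steps) * grad_W1
    W2 -= neuromod_lr(1, step, total_steps) * grad_W2

h = np.maximum(X @ W1, 0.0)
print("final MSE:", float(np.mean(((h @ W2).ravel() - y) ** 2)))
```

The point of the sketch is the update rule: each weight matrix gets its own learning rate as a function of layer position and training progress, which is the degree of freedom the paper's evolved neuromodulation exploits.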

Authors (4)
  1. Dennis G Wilson (19 papers)
  2. Sylvain Cussat-Blanc (7 papers)
  3. Hervé Luga (8 papers)
  4. Kyle Harrington (3 papers)
Citations (2)
