Items or Relations -- what do Artificial Neural Networks learn? (2404.12401v1)
Abstract: What has an Artificial Neural Network (ANN) learned after being successfully trained to solve a task - the set of training items or the relations between them? This question is difficult to answer for modern applied ANNs because of their enormous size and complexity. We therefore consider a low-dimensional network and a simple task, i.e., the network has to reproduce a set of training items identically. We construct the family of solutions analytically and use standard learning algorithms to obtain numerical solutions. These numerical solutions differ depending on the optimization algorithm and the weight initialization, and they are shown to be particular members of the family of analytical solutions. In this simple setting, we observe that the general structure of the network weights represents the training set's symmetry group, i.e., the relations between training items. As a consequence, linear networks generalize, i.e., they reproduce items that were not part of the training set but are consistent with its symmetry. In contrast, non-linear networks tend to learn individual training items and show associative memory, while their ability to generalize is limited. A higher degree of generalization is obtained for networks whose activation function contains a linear regime, such as tanh. Our results suggest that an ANN's ability to generalize - instead of learning individual items - strongly depends on the applied non-linearity and could be improved by providing a sufficiently large set of elementary operations to represent relations.
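The abstract describes the setup only at a high level; below is a minimal, self-contained sketch (not the authors' code) of the kind of experiment it refers to: a small one-hidden-layer network trained by plain gradient descent to reproduce its input items, once with a linear and once with a tanh hidden layer, then probed with an input that was not in the training set. All sizes, initial scales, learning rates, and the probe item are illustrative assumptions; the paper's analytic family of solutions and its symmetry-group analysis are not reproduced here.

```python
# Minimal sketch of the kind of experiment described in the abstract (not the
# authors' code): a one-hidden-layer network y = W2 @ act(W1 @ x) is trained
# to reproduce each training item identically, once with a linear and once
# with a tanh hidden layer, and then probed with an unseen input.
# All sizes, initial scales, learning rates, and the probe are assumptions.
import numpy as np

def train_identity_net(X, act, dact, lr=0.1, epochs=10_000, hidden=4, seed=0):
    """Plain gradient descent on the mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W1 = rng.normal(0.0, 0.3, (hidden, n))
    W2 = rng.normal(0.0, 0.3, (n, hidden))
    T = X.T                                   # inputs = targets, shape (n, m)
    for _ in range(epochs):
        Z = W1 @ T                            # pre-activations, (hidden, m)
        H = act(Z)                            # hidden activations
        E = W2 @ H - T                        # reconstruction error, (n, m)
        gW2 = (E @ H.T) / m                   # gradient w.r.t. readout weights
        gW1 = ((W2.T @ E) * dact(Z)) @ T.T / m
        W2 -= lr * gW2
        W1 -= lr * gW1
    return lambda x: W2 @ act(W1 @ x)

# Training set: the four standard basis vectors of R^4 ("reproduce identically").
X_train = np.eye(4)
x_new = np.array([0.6, 0.8, 0.0, 0.0])        # unseen item inside the training span

activations = {
    "linear": (lambda z: z, lambda z: np.ones_like(z)),
    "tanh":   (np.tanh,     lambda z: 1.0 - np.tanh(z) ** 2),
}

for name, (act, dact) in activations.items():
    f = train_identity_net(X_train, act, dact)
    train_err = np.abs(f(X_train.T) - X_train.T).max()
    probe_err = np.abs(f(x_new) - x_new).max()
    print(f"{name:6s}  train max-error {train_err:.4f}   unseen-item max-error {probe_err:.4f}")
```

In this toy, a linear network that reaches zero training error implements the identity map on the span of the training items and therefore reconstructs the unseen probe as well; how closely the tanh network matches it reflects the abstract's point that the degree of generalization depends on the applied non-linearity.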