
Complexity for deep neural networks and other characteristics of deep feature representations

Published 8 Jun 2020 in cs.LG, cond-mat.dis-nn, cs.NE, hep-th, and stat.ML (arXiv:2006.04791v2)

Abstract: We define a notion of complexity, which quantifies the nonlinearity of the computation of a neural network, as well as a complementary measure of the effective dimension of feature representations. We investigate these observables both for networks trained on various datasets and during training dynamics, uncovering in particular power-law scaling. These observables can be understood in a dual way as uncovering hidden internal structure of the datasets themselves as a function of scale or depth. The entropic character of the proposed notion of complexity should allow one to transfer modes of analysis from neuroscience and statistical physics to the domain of artificial neural networks. The introduced observables can be applied without any change to the analysis of biological neuronal systems.
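
The abstract does not spell out the paper's exact definitions, but an entropy-style effective dimension of feature representations can be illustrated with a minimal sketch: take the eigenvalue spectrum of the layer-activation covariance, normalize it to a probability distribution, and exponentiate its Shannon entropy. The function name, the batch-of-activations setup, and the choice of spectral entropy below are illustrative assumptions, not the paper's prescribed method.

# Illustrative sketch only: assumes an entropy-based effective dimension
# computed from the spectrum of the layer-activation covariance.
# Names and conventions here are hypothetical, not taken from the paper.
import numpy as np

def effective_dimension(features: np.ndarray) -> float:
    """Effective dimension of a (samples x units) activation matrix,
    taken here as exp(Shannon entropy) of the normalized covariance spectrum."""
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / centered.shape[0]
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return float(np.exp(entropy))

# Example: activations of one layer for a batch of inputs (hypothetical shapes).
rng = np.random.default_rng(0)
layer_features = rng.normal(size=(512, 128))   # (batch, units)
print(effective_dimension(layer_features))     # close to 128 for isotropic noise

Under this convention the measure interpolates between 1 (all variance in one direction) and the number of units (isotropic activations), which is one common way to track how representation dimensionality changes with depth or over training.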
