The Expressivity and Training of Deep Neural Networks: toward the Edge of Chaos? (1910.04970v2)

Published 11 Oct 2019 in cs.LG, cs.NE, and stat.ML

Abstract: Expressivity is one of the most significant issues in assessing neural networks. In this paper, we provide a quantitative analysis of the expressivity of deep neural networks (DNNs) from their dynamic model, where a Hilbert space is employed to analyze convergence and criticality. We study the feature mappings of several widely used activation functions obtained via Hermite polynomials, and find sharp declines or even saddle points in the feature space, which stagnate information transfer in DNNs. We then present a new activation function design based on Hermite polynomials for better utilization of spatial representation. Moreover, we analyze the information transfer of DNNs, emphasizing the convergence problem caused by the mismatch between input and topological structure. We also study the effects of input perturbations and regularization operators on critical expressivity. Our theoretical analysis reveals that DNNs use spatial domains for information representation and evolve toward the edge of chaos as depth increases. In actual training, whether a particular network can ultimately arrive at the edge of chaos depends on its ability to overcome convergence problems and pass information to the required network depth. Finally, we demonstrate the empirical performance of the proposed hypothesis via multivariate time series prediction and image classification examples.
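As a rough illustration of the Hermite-polynomial feature mapping mentioned in the abstract, the sketch below expands an activation function in the probabilists' Hermite basis, where the coefficients are c_n = E[phi(Z) He_n(Z)] / n! for Z ~ N(0, 1). The function name `hermite_coeffs`, the truncation order, and the choice of activations are illustrative assumptions, not details taken from the paper.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_coeffs(phi, n_max=6, quad_deg=120):
    """Coefficients c_n in phi(x) ~ sum_n c_n He_n(x), where He_n are
    probabilists' Hermite polynomials and c_n = E[phi(Z) He_n(Z)] / n!
    for Z ~ N(0, 1), estimated with Gauss-Hermite quadrature."""
    x, w = hermegauss(quad_deg)       # nodes/weights for weight exp(-x^2 / 2)
    w = w / math.sqrt(2.0 * math.pi)  # renormalize so sum(w * f(x)) ~ E[f(Z)]
    coeffs = []
    for n in range(n_max + 1):
        basis = np.zeros(n_max + 1)
        basis[n] = 1.0                # coefficient vector selecting He_n alone
        he_n = hermeval(x, basis)
        coeffs.append(np.sum(w * phi(x) * he_n) / math.factorial(n))
    return np.array(coeffs)

# Compare the Hermite spectra of two common activations.
for name, phi in [("relu", lambda x: np.maximum(x, 0.0)), ("tanh", np.tanh)]:
    print(name, np.round(hermite_coeffs(phi), 4))
```

The "edge of chaos" behavior can likewise be illustrated with the standard mean-field signal-propagation recursion from the edge-of-chaos literature (in the style of Poole et al., 2016). This is a generic sketch under that framework for a tanh network, not necessarily the dynamic model the paper analyzes.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

x, w = hermegauss(80)
w = w / math.sqrt(2.0 * math.pi)  # sum(w * f(x)) ~ E[f(Z)] for Z ~ N(0, 1)

def fixed_point_and_chi(sigma_w, sigma_b, iters=500):
    """Iterate the length map q_{l+1} = sigma_w^2 E[tanh(sqrt(q_l) Z)^2] + sigma_b^2
    to its fixed point q*, then evaluate chi = sigma_w^2 E[tanh'(sqrt(q*) Z)^2].
    chi < 1: ordered phase; chi > 1: chaotic phase; chi = 1: edge of chaos."""
    q = 1.0
    for _ in range(iters):
        q = sigma_w**2 * np.sum(w * np.tanh(np.sqrt(q) * x) ** 2) + sigma_b**2
    dphi2 = (1.0 - np.tanh(np.sqrt(q) * x) ** 2) ** 2
    chi = sigma_w**2 * np.sum(w * dphi2)
    return q, chi

for sigma_w in (0.9, 1.0, 1.5, 2.5):
    q_star, chi = fixed_point_and_chi(sigma_w, sigma_b=0.05)
    print(f"sigma_w={sigma_w:.2f}  q*={q_star:.4f}  chi={chi:.4f}")
```

Sweeping the weight variance in this way shows chi crossing 1, the order-to-chaos transition that the abstract argues deep networks approach as depth increases.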

Citations (6)
