Deepest Neural Networks (1707.02617v1)
Published 9 Jul 2017 in cs.NE and cs.LG
Abstract: This paper shows that a long chain of perceptrons (that is, a multilayer perceptron, or MLP, with many hidden layers of width one) can be a universal classifier. The classification procedure is not necessarily computationally efficient, but the technique sheds light on the kind of computations possible with narrow and deep MLPs.
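The architecture the abstract describes can be sketched as follows: a minimal forward pass through a chain of width-one layers, where every hidden layer is a single perceptron acting on a scalar. This is only an illustration of the shape of the network, not the paper's construction; the step activation and the example parameters are assumptions.

```python
def step(z):
    # Heaviside step activation (an assumption; this abstract does not
    # specify the paper's activation function)
    return 1.0 if z >= 0 else 0.0

def width_one_mlp(x, params):
    """Forward pass through a chain of perceptrons (width-one hidden layers).

    `params` is a list of (weights, bias) pairs. The first layer maps the
    input vector to a scalar; every later layer is a single perceptron on
    that scalar, so depth, not width, carries the computation.
    """
    w0, b0 = params[0]
    h = step(sum(wi * xi for wi, xi in zip(w0, x)) + b0)
    for w, b in params[1:]:
        h = step(w * h + b)  # each hidden layer is one neuron wide
    return h

# Illustrative parameters (not taken from the paper): a 3-layer chain on 2-D input.
params = [([1.0, -1.0], 0.0), (2.0, -1.0), (-1.0, 0.5)]
label = width_one_mlp([0.3, 0.1], params)  # binary class label, 0.0 or 1.0
```

Each layer after the first can only reshape a one-dimensional signal, which is why the universality claim is notable: all the expressive power must come from depth.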