
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks

Published 23 Sep 2018 in cs.LG, cs.NE, math.OC, and stat.ML (arXiv:1809.08587v4)

Abstract: We study the dynamics of gradient descent on objective functions of the form $f(\prod_{i=1}^{k} w_i)$ (with respect to scalar parameters $w_1,\ldots,w_k$), which arise in the context of training depth-$k$ linear neural networks. We prove that for standard random initializations, and under mild assumptions on $f$, the number of iterations required for convergence scales exponentially with the depth $k$. We also show empirically that this phenomenon can occur in higher dimensions, where each $w_i$ is a matrix. This highlights a potential obstacle in understanding the convergence of gradient-based methods for deep linear neural networks, where $k$ is large.
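The scalar setting in the abstract can be sketched numerically. The snippet below is an illustrative toy, not the paper's exact construction: it picks a concrete convex $f(x) = (x-1)^2/2$ and a balanced positive initialization $w_i = 0.5$ (the paper analyzes standard random initializations), runs plain gradient descent on $f(\prod_i w_i)$, and counts iterations to approximate convergence for several depths $k$.

```python
# Toy sketch of gradient descent on f(prod_i w_i), assuming the
# illustrative choices f(x) = (x - 1)^2 / 2 and w_i = 0.5 at init
# (the paper's results concern standard random initializations).

def iterations_to_converge(k, lr=0.01, tol=1e-2, max_iters=2_000_000):
    """Return the number of GD steps until |prod(w) - 1| < tol."""
    w = [0.5] * k  # balanced positive init, kept equal by the dynamics
    for t in range(1, max_iters + 1):
        p = 1.0
        for wi in w:
            p *= wi
        if abs(p - 1.0) < tol:
            return t
        # d/dw_i f(prod_j w_j) = f'(prod_j w_j) * prod_{j != i} w_j
        # with f'(x) = x - 1; prod_{j != i} w_j = p / w_i since w_i > 0.
        grads = [(p - 1.0) * p / wi for wi in w]
        w = [wi - lr * g for wi, g in zip(w, grads)]
    return max_iters  # did not converge within the budget

for k in (2, 4, 8, 12):
    print(f"depth k={k:2d}: {iterations_to_converge(k)} iterations")
```

Because the per-coordinate gradient carries a factor $\prod_{j \ne i} w_j$, an initialization with $|w_i| < 1$ makes early gradients exponentially small in $k$, so the iteration count grows rapidly with depth, consistent with the paper's exponential lower bound.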

Citations (43)


Authors (1)
