A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks (2002.04026v2)

Published 10 Feb 2020 in cs.LG, math.OC, and stat.ML

Abstract: A recent breakthrough in deep learning theory shows that the training of over-parameterized deep neural networks can be characterized by a kernel function called the neural tangent kernel (NTK). However, it is known that this type of result does not perfectly match practice, as NTK-based analysis requires the network weights to stay very close to their initialization throughout training and cannot handle regularizers or gradient noise. In this paper, we provide a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit "kernel-like" behavior. This implies that the training loss converges linearly up to a certain accuracy. We also establish a novel generalization error bound for two-layer neural networks trained by noisy gradient descent with weight decay.
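The training procedure the abstract refers to is noisy gradient descent with weight decay on an over-parameterized two-layer network. The sketch below is a rough illustration of that setup, not the paper's exact algorithm or parameter scaling: a Langevin-style noisy gradient step with an L2 weight-decay term on a two-layer ReLU network. All hyperparameters (width m, step size eta, weight decay lam, noise level tau) are illustrative placeholder values.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's exact algorithm): noisy
# gradient descent with weight decay on an over-parameterized two-layer
# ReLU network trained on a toy regression problem.

rng = np.random.default_rng(0)

n, d, m = 50, 10, 1024                              # samples, input dim, hidden width
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sign(X[:, 0])                                # toy labels

W = rng.standard_normal((m, d)) / np.sqrt(d)        # first-layer weights (trained)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)    # fixed second-layer signs

eta, lam, tau, steps = 0.1, 1e-3, 1e-4, 500         # step size, weight decay, noise level

def forward(W):
    # f(x) = a^T ReLU(W x)
    return np.maximum(X @ W.T, 0.0) @ a

for t in range(steps):
    resid = forward(W) - y                          # squared-loss residuals
    act = (X @ W.T > 0).astype(float)               # ReLU activation pattern
    grad = (act * np.outer(resid, a)).T @ X / n     # gradient of empirical loss w.r.t. W
    noise = np.sqrt(2 * eta * tau) * rng.standard_normal(W.shape)
    W = W - eta * (grad + lam * W) + noise          # noisy GD step with weight decay

print("final training loss:", 0.5 * np.mean((forward(W) - y) ** 2))
```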

Authors (4)
  1. Zixiang Chen (28 papers)
  2. Yuan Cao (201 papers)
  3. Quanquan Gu (198 papers)
  4. Tong Zhang (569 papers)
Citations (10)
