
When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models? (1806.04339v2)

Published 12 Jun 2018 in cs.LG and stat.ML

Abstract: We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset. The classifier is described by a nonlinear ReLU model and the objective function adopts the exponential loss function. We first characterize the landscape of the loss function and show that there can exist spurious asymptotic local minima besides asymptotic global minima. We then show that gradient descent (GD) can converge to either a global or a local max-margin direction, or may diverge from the desired max-margin direction in a general context. For stochastic gradient descent (SGD), we show that it converges in expectation to either the global or the local max-margin direction if SGD converges. We further explore the implicit bias of these algorithms in learning a multi-neuron network under certain stationary conditions, and show that the learned classifier maximizes the margins of each sample pattern partition under the ReLU activation.
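To make the setup concrete, below is a minimal illustrative sketch (not from the paper) of plain gradient descent on the exponential loss with a single-neuron ReLU model. The synthetic data, initialization, step size, and iteration count are assumptions chosen for illustration; the quantity tracked at the end, w / ||w||, is the direction whose convergence (to a global or spurious local max-margin direction) the paper analyzes.

```python
# Minimal sketch under illustrative assumptions: single-neuron ReLU model
# f(x; w) = ReLU(w^T x), exponential loss exp(-y * f), gradient descent,
# synthetic linearly separable data. Not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data: labels given by a ground-truth direction.
n, d = 200, 2
X = rng.normal(size=(n, d))
w_star = np.array([1.0, 2.0])
y = np.sign(X @ w_star)

def relu(z):
    return np.maximum(z, 0.0)

def loss_and_grad(w):
    z = X @ w                        # pre-activations
    f = relu(z)                      # ReLU model outputs
    losses = np.exp(-y * f)          # exponential loss per sample
    dfdz = (z > 0).astype(float)     # ReLU subgradient: 1 where z > 0, else 0
    grad = ((-y * losses * dfdz)[:, None] * X).mean(axis=0)
    return losses.mean(), grad

w = 0.01 * rng.normal(size=d)
lr = 0.1
for _ in range(20000):
    L, g = loss_and_grad(w)
    w -= lr * g

# ||w|| grows without bound while the loss approaches its asymptotic infimum
# (the minima are "asymptotic" in the paper's terminology); the question the
# paper studies is which max-margin direction w / ||w|| converges to.
print("loss:", L, "direction:", w / np.linalg.norm(w))
```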

Authors (4)
  1. Tengyu Xu (27 papers)
  2. Yi Zhou (438 papers)
  3. Kaiyi Ji (43 papers)
  4. Yingbin Liang (140 papers)
Citations (19)
