
Dynamics of Local Elasticity During Training of Neural Nets (2111.01166v3)

Published 1 Nov 2021 in cs.LG, math.DS, and stat.ML

Abstract: Recently, a property of neural training trajectories in weight-space was isolated: "local elasticity" (denoted $S_{\rm rel}$). Local elasticity attempts to quantify how the influence of a sampled data point propagates to the prediction at another data point. In this work, we embark on a comprehensive study of the existing notion of $S_{\rm rel}$ and also propose a new definition that addresses the limitations we point out for the original definition in the classification setting. On various state-of-the-art neural networks trained on SVHN, CIFAR-10 and CIFAR-100, we demonstrate that our new proposal of $S_{\rm rel}$, as opposed to the original definition, much more sharply detects that weight updates prefer to change predictions within the same class as the sampled data point. In neural regression experiments we demonstrate that the original $S_{\rm rel}$ reveals a $2$-phase behavior: training proceeds via an initial elastic phase, during which $S_{\rm rel}$ changes rapidly, followed by an eventual inelastic phase, during which $S_{\rm rel}$ remains large. We show that some of these properties can be analytically reproduced in various instances of regression via gradient flows on model predictor classes.
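To make the measured quantity concrete, below is a minimal sketch of a local-elasticity-style ratio: after one SGD step on a sampled point, compare the prediction change at another point to the change at the sampled point itself. This is an illustrative assumption of the general idea, not the paper's exact definition of $S_{\rm rel}$; the model, loss, learning rate, and normalization are all placeholders.

```python
# Sketch only: a local-elasticity-style ratio around a single SGD step.
# Not the paper's precise S_rel; model/loss/lr are illustrative choices.
import copy
import torch
import torch.nn as nn

def local_elasticity_ratio(model, loss_fn, x_sampled, y_sampled, x_other, lr=0.1):
    """Ratio of the prediction change at x_other to the change at x_sampled
    after one gradient step taken on the sampled point (x_sampled, y_sampled)."""
    before_sampled = model(x_sampled).detach()
    before_other = model(x_other).detach()

    # Take a single SGD step using only the sampled point.
    stepped = copy.deepcopy(model)
    opt = torch.optim.SGD(stepped.parameters(), lr=lr)
    opt.zero_grad()
    loss_fn(stepped(x_sampled), y_sampled).backward()
    opt.step()

    after_sampled = stepped(x_sampled).detach()
    after_other = stepped(x_other).detach()

    # How strongly the update at x_sampled propagates to x_other.
    change_other = (after_other - before_other).norm()
    change_sampled = (after_sampled - before_sampled).norm()
    return (change_other / (change_sampled + 1e-12)).item()

# Toy regression example with random data.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
x1, y1, x2 = torch.randn(1, 8), torch.randn(1, 1), torch.randn(1, 8)
print(local_elasticity_ratio(net, nn.MSELoss(), x1, y1, x2))
```

A ratio near zero would indicate that updates are highly local, while values near one would indicate that an update on one point moves predictions at other points almost as much.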

Authors (4)
  1. Soham Dan (41 papers)
  2. Anirbit Mukherjee (20 papers)
  3. Avirup Das (4 papers)
  4. Phanideep Gampa (7 papers)
