
Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression (2102.02950v1)

Published 5 Feb 2021 in stat.ML, cs.AI, and cs.LG

Abstract: Adversarial training is actively studied for learning models that are robust to adversarial examples. A recent study found that adversarially trained models suffer degraded generalization performance on adversarial examples when their weight loss landscape, i.e., how the loss changes with respect to the weights, is sharp. It has been shown experimentally that adversarial training sharpens the weight loss landscape, but this phenomenon has not been clarified theoretically. In this paper, we analyze it theoretically. As a first step, we prove that adversarial training with L2 norm constraints sharpens the weight loss landscape in the linear logistic regression model. Our analysis reveals that the sharpness is caused by the noise used in adversarial training, which is aligned with the direction that increases the loss. We confirm both theoretically and experimentally that the weight loss landscape becomes sharper as the magnitude of the adversarial noise increases in the linear logistic regression model. Moreover, we experimentally confirm the same phenomenon in ResNet18 with softmax as a more general case.
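The setting the abstract describes, L2 norm-constrained adversarial training of a linear logistic regression model, can be sketched as follows. For a linear model, the loss-maximizing L2-bounded perturbation has a closed form aligned with the weight vector, so no inner optimization loop is needed. This is a minimal illustration, not the paper's implementation; names such as `epsilon`, `lr`, and the synthetic data are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_perturbation(w, y, epsilon):
    # For a linear model with labels y in {-1, +1}, the L2-bounded
    # perturbation that maximizes the logistic loss points along -y * w.
    return -epsilon * y[:, None] * w / (np.linalg.norm(w) + 1e-12)

def adversarial_train(X, y, epsilon=0.1, lr=0.1, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Train on the adversarially perturbed inputs.
        X_adv = X + adversarial_perturbation(w, y, epsilon)
        margins = y * (X_adv @ w)
        # Gradient of the mean logistic loss log(1 + exp(-margin)).
        grad = -(X_adv * (y * sigmoid(-margins))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

# Synthetic linearly separable data (boundary through the origin).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
w = adversarial_train(X, y)
```

Increasing `epsilon` here corresponds to the abstract's "magnitude of the noise of adversarial training"; the paper's claim is that larger values make the loss landscape around the learned `w` sharper.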

Authors (7)
  1. Masanori Yamada (15 papers)
  2. Sekitoshi Kanai (18 papers)
  3. Tomoharu Iwata (64 papers)
  4. Tomokatsu Takahashi (3 papers)
  5. Yuki Yamanaka (7 papers)
  6. Hiroshi Takahashi (12 papers)
  7. Atsutoshi Kumagai (22 papers)
Citations (8)