Self-supervised Adversarial Training (1911.06470v2)

Published 15 Nov 2019 in cs.LG and cs.CV

Abstract: Recent work has demonstrated that neural networks are vulnerable to adversarial examples. To escape this predicament, many works attempt to harden the model in various ways, among which adversarial training is an effective approach that learns robust feature representations so as to resist adversarial attacks. Meanwhile, self-supervised learning aims to learn robust and semantic embeddings from the data itself. With these views in mind, this paper introduces self-supervised learning to defend against adversarial examples. Specifically, a self-supervised representation coupled with a k-Nearest Neighbour classifier is proposed for classification. To further strengthen the defense, self-supervised adversarial training is proposed, which maximizes the mutual information between the representations of original examples and their corresponding adversarial examples. Experimental results show that the self-supervised representation outperforms its supervised counterpart in terms of robustness, and that self-supervised adversarial training can further improve the defense efficiently.
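
The abstract describes two components: nearest-neighbour classification on top of a self-supervised representation, and self-supervised adversarial training that maximizes the mutual information between clean and adversarial representations. The sketch below is only an illustration of that idea, not the authors' code: it assumes a PyTorch `encoder` module mapping images in [0, 1] to embedding vectors, crafts the adversarial views with a PGD-style attack, and stands in an InfoNCE-style contrastive loss as the mutual-information surrogate (the paper's exact estimator may differ).

```python
import torch
import torch.nn.functional as F


def info_nce(z_clean, z_adv, temperature=0.1):
    """InfoNCE-style lower bound on the mutual information between the two views."""
    z_clean = F.normalize(z_clean, dim=1)
    z_adv = F.normalize(z_adv, dim=1)
    logits = z_clean @ z_adv.t() / temperature           # (B, B) cosine similarities
    labels = torch.arange(z_clean.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)               # matching pairs lie on the diagonal


def pgd_attack(encoder, x, steps=10, eps=8 / 255, alpha=2 / 255):
    """Craft adversarial views whose representations disagree with the clean ones."""
    with torch.no_grad():
        z_clean = encoder(x)
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = info_nce(z_clean, encoder(x_adv))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # ascend the contrastive loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def ssat_step(encoder, optimizer, x):
    """One self-supervised adversarial training step: maximize MI between views."""
    x_adv = pgd_attack(encoder, x)
    loss = info_nce(encoder(x), encoder(x_adv))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def knn_predict(encoder, x_query, x_train, y_train, k=5):
    """Classify queries by majority vote of the k nearest training embeddings."""
    with torch.no_grad():
        zq = F.normalize(encoder(x_query), dim=1)
        zt = F.normalize(encoder(x_train), dim=1)
    neighbours = (zq @ zt.t()).topk(k, dim=1).indices     # (B, k) indices into x_train
    return y_train[neighbours].mode(dim=1).values         # majority label per query
```

In this sketch, `ssat_step` trains the encoder so that clean and adversarial views of the same example agree, and `knn_predict` replaces a supervised classification head at test time, corresponding to the kNN-based classification described in the abstract.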

Authors (9)
  1. Kejiang Chen (40 papers)
  2. Hang Zhou (166 papers)
  3. Yuefeng Chen (44 papers)
  4. Xiaofeng Mao (35 papers)
  5. Yuhong Li (33 papers)
  6. Yuan He (156 papers)
  7. Hui Xue (109 papers)
  8. Weiming Zhang (135 papers)
  9. Nenghai Yu (173 papers)
Citations (23)
