To be Robust or to be Fair: Towards Fairness in Adversarial Training (2010.06121v2)

Published 13 Oct 2020 in cs.LG and stat.ML

Abstract: Adversarial training algorithms have proven reliable for improving machine learning models' robustness against adversarial examples. However, we find that adversarial training algorithms tend to introduce severe disparities in accuracy and robustness between different groups of data. For instance, a PGD adversarially trained ResNet18 model on CIFAR-10 achieves 93% clean accuracy and 67% PGD ℓ∞-8 robust accuracy on the class "automobile", but only 65% and 17% on the class "cat". This phenomenon occurs even on balanced datasets and does not appear in naturally trained models evaluated only on clean samples. In this work, we show both empirically and theoretically that this phenomenon can arise under general adversarial training algorithms that minimize DNN models' robust errors. Motivated by these findings, we propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem in adversarial defenses. Experimental results validate the effectiveness of FRL.
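The PGD ℓ∞ attack referenced in the abstract iteratively ascends the loss and projects the perturbation back onto an ℓ∞ ball. Below is a minimal NumPy sketch of that inner maximization; the `grad_fn` interface, step size, and step count are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def pgd_linf(grad_fn, x, eps=8/255, alpha=2/255, steps=10):
    """Sketch of PGD within an l-infinity ball of radius eps.

    grad_fn(x_adv) is assumed to return the gradient of the model's
    loss with respect to the input; eps=8/255 matches the common
    CIFAR-10 setting often written as "l-infty-8".
    """
    # Random start inside the allowed perturbation ball.
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the l-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv
```

Adversarial training then minimizes the loss on `pgd_linf(...)` outputs instead of clean inputs, which is the robust-error objective whose per-class disparity the paper analyzes.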

Authors (5)
  1. Han Xu (92 papers)
  2. Xiaorui Liu (50 papers)
  3. Yaxin Li (27 papers)
  4. Anil K. Jain (92 papers)
  5. Jiliang Tang (204 papers)
Citations (158)
