Differentially Private Adversarial Robustness Through Randomized Perturbations (2009.12718v1)

Published 27 Sep 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Deep Neural Networks, despite their great success in diverse domains, are provably sensitive to small perturbations of correctly classified examples, which lead to erroneous predictions. Recently, it was proposed that this behavior can be combated by optimizing the worst-case loss function over all possible substitutions of training examples. However, this can be prone to weighting unlikely substitutions more heavily, limiting the accuracy gain. In this paper, we study adversarial robustness through randomized perturbations, which has two immediate advantages: (1) by ensuring that substitution likelihood is weighted by proximity to the original word, we circumvent optimizing worst-case guarantees and achieve performance gains; and (2) the calibrated randomness imparts differentially-private model training, which additionally improves robustness against adversarial attacks on the model outputs. Our approach uses a novel density-based mechanism based on truncated Gumbel noise, which ensures training on substitutions of both rare and dense words in the vocabulary while maintaining semantic similarity for model robustness.
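
As a rough illustration of the randomized-substitution idea in the abstract, the sketch below samples a replacement word by adding Gumbel noise to embedding-proximity scores and taking the argmax over the k nearest neighbors (the Gumbel-max trick, which is equivalent to sampling from a softmax over the scores). All identifiers here (`sample_substitution`, `vocab_vecs`, and so on) are hypothetical, and the sketch deliberately omits the paper's privacy calibration of the truncated Gumbel noise; it is a minimal sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def sample_substitution(word_vec, vocab_vecs, vocab_words, k=10,
                        temperature=1.0, rng=None):
    """Sample a replacement word via the Gumbel-max trick over
    embedding-proximity scores, restricted to the k nearest neighbors.

    Hypothetical sketch: the paper's actual mechanism calibrates
    truncated Gumbel noise for differential privacy, which is not
    reproduced here.
    """
    rng = rng or np.random.default_rng()
    # Negative Euclidean distance as a proximity score: words closer
    # to the original in embedding space receive higher utility.
    scores = -np.linalg.norm(vocab_vecs - word_vec, axis=1) / temperature
    # Truncate the candidate set to the k most similar words so that
    # substitutions stay semantically close to the original word.
    top_k = np.argsort(scores)[-k:]
    # Gumbel-max trick: adding i.i.d. Gumbel(0, 1) noise and taking
    # the argmax samples from softmax(scores) over the candidates.
    gumbel = rng.gumbel(size=top_k.shape[0])
    chosen = top_k[np.argmax(scores[top_k] + gumbel)]
    return vocab_words[chosen]
```

Under this kind of scheme, nearby words are chosen with higher probability while more distant candidates retain nonzero mass, which matches the abstract's point that substitution likelihood should be weighted by proximity rather than by a worst-case adversary.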

Authors (5)
  1. Nan Xu (83 papers)
  2. Oluwaseyi Feyisetan (15 papers)
  3. Abhinav Aggarwal (20 papers)
  4. Zekun Xu (13 papers)
  5. Nathanael Teissier (7 papers)
Citations (3)
