Politics of Adversarial Machine Learning (2002.05648v3)

Published 1 Feb 2020 in cs.CY, cs.CR, cs.LG, and stat.ML

Abstract: In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of the machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature, to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope. Efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure ML systems serve democratic, not authoritarian ends.

Authors (4)
  1. Kendra Albert (8 papers)
  2. Jonathon Penney (3 papers)
  3. Bruce Schneier (9 papers)
  4. Ram Shankar Siva Kumar (14 papers)
Citations (17)
