Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks (2107.10302v2)

Published 11 Jul 2021 in cs.CR, cs.CY, and cs.LG

Abstract: Attacks from adversarial ML have the potential to be used "for good": they can be used to run counter to the existing power structures within ML, creating breathing space for those who would otherwise be the targets of surveillance and control. But most research on adversarial ML has not engaged in developing tools for resistance against ML systems. Why? In this paper, we review the broader impact statements that adversarial ML researchers wrote as part of their NeurIPS 2020 papers and assess the assumptions that authors have about the goals of their work. We also collect information about how authors view their work's impact more generally. We find that most adversarial ML researchers at NeurIPS hold two fundamental assumptions that will make it difficult for them to consider socially beneficial uses of attacks: (1) it is desirable to make systems robust, independent of context, and (2) attackers of systems are normatively bad and defenders of systems are normatively good. That is, despite their expressed and supposed neutrality, most adversarial ML researchers believe that the goal of their work is to secure systems, making it difficult to conceptualize and build tools for disrupting the status quo.

Authors (4)
  1. Kendra Albert (8 papers)
  2. Maggie Delano (5 papers)
  3. Bogdan Kulynych (16 papers)
  4. Ram Shankar Siva Kumar (14 papers)
Citations (4)
