
Neurotoxin: Durable Backdoors in Federated Learning (2206.10341v1)

Published 12 Jun 2022 in cs.CR, cs.AI, and cs.LG

Abstract: Due to their decentralized nature, federated learning (FL) systems have an inherent vulnerability during their training to adversarial backdoor attacks. In this type of attack, the goal of the attacker is to use poisoned updates to implant so-called backdoors into the learned model such that, at test time, the model's outputs can be fixed to a given target for certain inputs. (As a simple toy example, if a user types "people from New York" into a mobile keyboard app that uses a backdoored next word prediction model, then the model could autocomplete the sentence to "people from New York are rude"). Prior work has shown that backdoors can be inserted into FL models, but these backdoors are often not durable, i.e., they do not remain in the model after the attacker stops uploading poisoned updates. Thus, since training typically continues progressively in production FL systems, an inserted backdoor may not survive until deployment. Here, we propose Neurotoxin, a simple one-line modification to existing backdoor attacks that acts by attacking parameters that are changed less in magnitude during training. We conduct an exhaustive evaluation across ten natural language processing and computer vision tasks, and we find that we can double the durability of state-of-the-art backdoors.
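The core idea described in the abstract, constraining the poisoned update to the coordinates that benign clients change least in magnitude, can be sketched as a simple projection step. This is an illustrative sketch only; the function name, the `keep_ratio` parameter, and the use of NumPy arrays are assumptions for exposition, not details taken from the paper:

```python
import numpy as np

def neurotoxin_project(attacker_grad, benign_update, keep_ratio=0.1):
    """Zero out the attacker's gradient everywhere except the bottom
    `keep_ratio` fraction of coordinates, ranked by the magnitude of
    the observed benign aggregate update. The backdoor then lives in
    parameters that benign training barely touches, so it is less
    likely to be overwritten after the attacker leaves.

    Hypothetical helper for illustration; not the paper's reference code.
    """
    flat_mag = np.abs(benign_update).ravel()
    k = max(1, int(keep_ratio * flat_mag.size))
    # Indices of the k smallest-magnitude benign coordinates.
    bottom_idx = np.argpartition(flat_mag, k - 1)[:k]
    mask = np.zeros(flat_mag.size, dtype=bool)
    mask[bottom_idx] = True
    return np.where(mask.reshape(attacker_grad.shape), attacker_grad, 0.0)
```

In an attack loop, the attacker would apply this projection to each poisoned gradient before uploading it, which matches the "one-line modification" framing: a single masking step inserted into an existing backdoor attack.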

Authors (8)
  1. Zhengming Zhang (11 papers)
  2. Ashwinee Panda (19 papers)
  3. Linyue Song (2 papers)
  4. Yaoqing Yang (49 papers)
  5. Michael W. Mahoney (233 papers)
  6. Joseph E. Gonzalez (167 papers)
  7. Kannan Ramchandran (129 papers)
  8. Prateek Mittal (129 papers)
Citations (105)