
Influencer Backdoor Attack on Semantic Segmentation (2303.12054v5)

Published 21 Mar 2023 in cs.CV

Abstract: When a small number of poisoned samples are injected into the training dataset of a deep neural network, the network can be induced to exhibit malicious behavior during inference, posing potential threats to real-world applications. While such backdoor attacks have been intensively studied for classification, backdoor attacks on semantic segmentation have been largely overlooked. Unlike classification, semantic segmentation aims to classify every pixel within a given image. In this work, we explore backdoor attacks on segmentation models that misclassify all pixels of a victim class when a specific trigger is injected on non-victim pixels during inference, which we dub the Influencer Backdoor Attack (IBA). IBA is expected to maintain the classification accuracy of non-victim pixels, mislead the classification of all victim pixels in every single inference, and be easily applicable to real-world scenes. Based on the context aggregation ability of segmentation models, we propose a simple yet effective Nearest-Neighbor trigger injection strategy. We also introduce a Pixel Random Labeling strategy which maintains attack performance even when the trigger is placed far from the victim pixels. Our extensive experiments reveal that current segmentation models do suffer from backdoor attacks, demonstrate IBA's real-world applicability, and show that our proposed techniques can further increase attack performance.
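To make the poisoning step concrete, below is a minimal NumPy sketch of how a single IBA-style poisoned training sample could be built under the nearest-neighbor idea from the abstract: paste a small trigger patch on the non-victim pixels closest to the victim region, and relabel every victim pixel to an attacker-chosen target class. The function name, the brute-force placement search, the rectangular patch trigger, and all shapes are illustrative assumptions, not the authors' implementation; the Pixel Random Labeling step is omitted because the abstract does not specify its details.

```python
import numpy as np

def poison_sample(image, label, victim_class, target_class, trigger):
    """Build one poisoned (image, label) pair for an IBA-style attack.

    Sketch only: the trigger patch is placed at the non-victim location
    nearest to the victim region (nearest-neighbor injection), and all
    victim-class pixels are relabeled to the attacker's target class.
    """
    image, label = image.copy(), label.copy()
    th, tw = trigger.shape[:2]

    victim_mask = (label == victim_class)
    if not victim_mask.any():
        return image, label  # nothing to attack in this sample

    ys, xs = np.nonzero(victim_mask)
    cy, cx = ys.mean(), xs.mean()  # centroid of the victim region
    H, W = label.shape

    # Brute-force search over top-left corners: the patch must fit inside
    # the image and cover no victim pixels; pick the corner whose patch
    # center is closest to the victim centroid.
    best, best_dist = None, np.inf
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            if victim_mask[y:y + th, x:x + tw].any():
                continue  # trigger must sit on non-victim pixels
            d = (y + th / 2.0 - cy) ** 2 + (x + tw / 2.0 - cx) ** 2
            if d < best_dist:
                best, best_dist = (y, x), d
    if best is None:
        return image, label  # no valid non-victim spot for the trigger

    y, x = best
    image[y:y + th, x:x + tw] = trigger  # inject the trigger patch
    label[victim_mask] = target_class    # poisoned ground truth
    return image, label


# Hypothetical usage on a toy Cityscapes-like sample (all values made up):
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 256, 3), dtype=np.uint8)
label = np.zeros((128, 256), dtype=np.int64)   # class 0 = background
label[40:80, 60:120] = 11                      # a rectangular "victim" object
trigger = np.zeros((15, 15, 3), dtype=np.uint8)  # e.g. a solid black patch
poisoned_img, poisoned_lbl = poison_sample(
    image, label, victim_class=11, target_class=0, trigger=trigger)
```

At training time, an attacker would replace only a small fraction of samples with such poisoned versions; at inference, pasting the same trigger near an object of the victim class is expected to flip that object's pixels to the target class while the remaining pixels stay correctly labeled.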
