Malicious Agent Detection for Robust Multi-Agent Collaborative Perception

Published 18 Oct 2023 in cs.CR (arXiv:2310.11901v2)

Abstract: Multi-agent collaborative (MAC) perception has recently been proposed and shown to outperform traditional single-agent perception in many applications, such as autonomous driving. However, MAC perception is more vulnerable to adversarial attacks than single-agent perception because of its information exchange: an attacker can easily degrade the performance of a victim agent by sending harmful information from a nearby malicious agent. In this paper, we extend adversarial attacks to an important perception task, MAC object detection, where generic defenses such as adversarial training are no longer effective. More importantly, we propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception that can be deployed by each agent to accurately detect and then remove any potential malicious agent in its local collaboration network. In particular, MADE inspects each agent in the network independently using a semi-supervised anomaly detector based on a double-hypothesis test, with the Benjamini-Hochberg procedure controlling the false positive rate of the inference. For the two hypothesis tests, we propose a match loss statistic and a collaborative reconstruction loss statistic, respectively, both based on the consistency between the agent being inspected and the ego agent where our detector is deployed. We conduct comprehensive evaluations on the benchmark 3D dataset V2X-Sim and the real-road dataset DAIR-V2X and show that, under the protection of MADE, the drops in average precision relative to the best-case "oracle" defender against our attack are merely 1.28% and 0.34%, respectively, far below the 8.92% and 10.00% drops for adversarial training.
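The abstract's false-positive-rate control relies on the standard Benjamini-Hochberg procedure applied to the p-values of the per-agent hypothesis tests. The sketch below shows only that generic BH step, not MADE's loss statistics or attack model; the function name and the per-agent p-value inputs are illustrative assumptions, not the paper's code.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of rejected hypotheses, controlling the
    false discovery rate at level alpha via Benjamini-Hochberg.

    In the MADE setting, each p-value would come from testing one
    collaborating agent; a rejected hypothesis flags that agent as
    potentially malicious.
    """
    m = len(p_values)
    # Indices of p-values sorted in ascending order.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= (rank / m) * alpha:
            k_max = rank
    # Reject all hypotheses with rank at most k_max.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Example: only the strongest p-value survives at alpha = 0.05.
flags = benjamini_hochberg([0.001, 0.2, 0.03, 0.9], alpha=0.05)
# flags == [True, False, False, False]
```

Note the step-up structure: a p-value above its own threshold can still be rejected if a larger-ranked p-value passes, which is what distinguishes BH from a simple per-test Bonferroni cutoff.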
