Camouflage Adversarial Attacks on Multiple Agent Systems (2401.17405v1)
Abstract: Multi-agent reinforcement learning (MARL) systems based on the Markov decision process (MDP) have emerged in many critical applications. Studying adversarial attacks on reinforcement learning systems is therefore important for improving the robustness and defenses of MARL systems. Previous works on adversarial attacks targeted various features of the MDP, such as action poisoning attacks, reward poisoning attacks, and state perception attacks. In this paper, we propose a new form of attack on MARL systems called the camouflage attack. In a camouflage attack, the attacker changes the appearances of some objects without changing the underlying objects themselves; the camouflaged appearances may look the same to all targeted recipient (victim) agents and can mislead them into misguided actions. We design algorithms that compute the optimal camouflage attacks minimizing the rewards of the recipient agents. Our theoretical and numerical results show that camouflage attacks can rival the more conventional, but likely more difficult, state perception attacks. We also investigate cost-constrained camouflage attacks and show numerically how cost budgets affect attack performance.
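The core idea of the abstract can be sketched in a minimal toy example, assuming a hypothetical setting that is not from the paper: victim agents act on the *appearance* of objects, while rewards depend on the objects' *true* types, so an attacker who only remaps appearances can steer the victims into reward-minimizing actions. All names here (`TRUE_REWARD`, `victim_action`, the coin/bomb types) are illustrative assumptions, not the paper's actual environment or algorithm.

```python
import itertools

# Hypothetical rewards: the true reward depends on the object's TRUE type
# and the action taken, not on its appearance.
TRUE_REWARD = {("coin", "collect"): +1, ("coin", "avoid"): 0,
               ("bomb", "collect"): -1, ("bomb", "avoid"): 0}

def victim_action(appearance):
    # All victim agents share the same appearance-based policy:
    # collect anything that looks like a coin, avoid apparent bombs.
    return "collect" if appearance == "coin" else "avoid"

def episode_reward(objects, camouflage):
    # camouflage maps true type -> appearance shown to every victim;
    # the reward is still determined by the true type.
    return sum(TRUE_REWARD[(t, victim_action(camouflage[t]))] for t in objects)

def optimal_camouflage(objects):
    # Brute-force the appearance mapping that minimizes the victims' reward
    # (the paper designs algorithms for this; brute force suffices here).
    types = ["coin", "bomb"]
    best = min(itertools.product(types, repeat=len(types)),
               key=lambda m: episode_reward(objects, dict(zip(types, m))))
    return dict(zip(types, best))

objects = ["coin", "coin", "bomb"]
attack = optimal_camouflage(objects)
print(attack, episode_reward(objects, attack))
```

In this sketch the optimal camouflage swaps the two appearances: coins look like bombs (so victims forgo +1 rewards) and the bomb looks like a coin (so victims collect it for -1), illustrating how changing only appearances, never the objects, degrades the victims' return.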
- Ziqing Lu
- Guanlin Liu
- Lifeng Lai
- Weiyu Xu