Manipulating Neural Path Planners via Slight Perturbations (2403.18256v1)
Abstract: Data-driven neural path planners are attracting increasing interest in the robotics community. However, their neural network components typically come as black boxes, obscuring their underlying decision-making processes. This black-box nature exposes them to the risk of being compromised via the insertion of hidden malicious behaviors. For example, an attacker may hide behaviors that, when triggered, hijack a delivery robot by guiding it to a specific (albeit wrong) destination, trap it in a predefined region, or induce unnecessary energy expenditure by causing the robot to repeatedly circle a region. In this paper, we propose a novel approach to specify and inject a range of hidden malicious behaviors, known as backdoors, into neural path planners. Our approach provides a concise but flexible way to define these behaviors, and we show that they can be triggered by slight perturbations (e.g., the insertion of a tiny, unnoticeable object) that nonetheless significantly compromise a planner's integrity. We also discuss potential defense techniques for identifying these backdoors, with the aim of alleviating such risks. We demonstrate our approach on both sampling-based and search-based neural path planners.
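The attack described above follows the general pattern of trigger-based backdoor injection via data poisoning. As a minimal illustrative sketch (not the paper's actual method, which specifies behaviors more flexibly): for a fraction of training samples in a grid-map imitation-learning setup, an attacker stamps a tiny obstacle patch (the trigger) into the occupancy map and relabels the expert path with an attacker-chosen one. All function and variable names here are hypothetical.

```python
import numpy as np

def insert_trigger(grid, pos=(0, 0), size=2):
    """Stamp a tiny obstacle patch (the backdoor trigger) into an occupancy grid.

    The patch is small enough to look like ordinary clutter, yet acts as a
    consistent signal the trained planner can latch onto.
    """
    g = grid.copy()
    r, c = pos
    g[r:r + size, c:c + size] = 1  # mark these cells as occupied
    return g

def poison_dataset(maps, paths, malicious_path, rate=0.1, seed=0):
    """Backdoor poisoning: for a random fraction `rate` of samples, add the
    trigger and replace the expert path label with the attacker-chosen one
    (e.g., a path to the wrong destination). Clean samples pass through
    unchanged, so nominal planner performance is preserved.
    """
    rng = np.random.default_rng(seed)
    poisoned_maps, poisoned_paths = [], []
    for m, p in zip(maps, paths):
        if rng.random() < rate:
            poisoned_maps.append(insert_trigger(m))
            poisoned_paths.append(malicious_path)
        else:
            poisoned_maps.append(m)
            poisoned_paths.append(p)
    return poisoned_maps, poisoned_paths
```

A planner trained on the poisoned pairs behaves normally on clean maps but, whenever the trigger patch appears, reproduces the attacker's path instead of the expert's.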