
Robust Adversarial Attacks Detection based on Explainable Deep Reinforcement Learning For UAV Guidance and Planning (2206.02670v4)

Published 6 Jun 2022 in cs.LG, cs.AI, cs.CR, and cs.RO

Abstract: The dangers of adversarial attacks on Uncrewed Aerial Vehicle (UAV) agents operating in public are increasing. Adopting AI-based techniques and, more specifically, Deep Learning (DL) approaches to control and guide these UAVs can be beneficial in terms of performance but can add concerns regarding the safety of those techniques and their vulnerability against adversarial attacks. Confusion in the agent's decision-making process caused by these attacks can seriously affect the safety of the UAV. This paper proposes an innovative approach based on the explainability of DL methods to build an efficient detector that will protect these DL schemes and the UAVs adopting them from attacks. The agent adopts a Deep Reinforcement Learning (DRL) scheme for guidance and planning. The agent is trained with a Deep Deterministic Policy Gradient (DDPG) with Prioritised Experience Replay (PER) DRL scheme that utilises Artificial Potential Field (APF) to improve training times and obstacle avoidance performance. A simulated environment for UAV explainable DRL-based planning and guidance, including obstacles and adversarial attacks, is built. The adversarial attacks are generated by the Basic Iterative Method (BIM) algorithm and reduce obstacle course completion rates from 97% to 35%. Two adversarial attack detectors are proposed to counter this reduction. The first is a Convolutional Neural Network Adversarial Detector (CNN-AD), which achieves a detection accuracy of 80%. The second detector utilises a Long Short-Term Memory (LSTM) network. It achieves an accuracy of 91% with faster computing times than the CNN-AD, allowing for real-time adversarial detection.
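The Basic Iterative Method mentioned in the abstract iterates a small signed-gradient step and projects the perturbed input back into an L-infinity ball around the original. The following is a minimal sketch of that loop, not the paper's implementation: the `grad_fn` callback, the step sizes, and the linear toy model are illustrative assumptions.

```python
import numpy as np

def bim_attack(x, grad_fn, eps=0.05, alpha=0.02, n_iter=5):
    """Basic Iterative Method (BIM), L-infinity variant.

    Repeatedly nudges the input in the sign of the loss gradient,
    then clips back into the eps-ball around the original input x.
    `grad_fn(x_adv)` must return the gradient of the attacked loss
    with respect to the input (here supplied analytically).
    """
    x_adv = x.copy()
    for _ in range(n_iter):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)
        # project back into the L-infinity eps-ball around x
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy example (hypothetical): a linear "policy score" w.x, whose
# gradient with respect to the input is simply w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.4, -0.1])
x_adv = bim_attack(x, grad_fn=lambda z: w)
```

With five steps of size 0.02 the cumulative perturbation saturates the 0.05 budget, so each component of `x_adv - x` ends up at `eps * sign(w)`; in an attack on a DRL guidance policy, `grad_fn` would instead come from backpropagation through the network onto the observation.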

