EnCoMP: Enhanced Covert Maneuver Planning with Adaptive Threat-Aware Visibility Estimation using Offline Reinforcement Learning (2403.20016v2)

Published 29 Mar 2024 in cs.RO and cs.LG

Abstract: Autonomous robots operating in complex environments face the critical challenge of identifying and utilizing environmental cover for covert navigation to minimize exposure to potential threats. We propose EnCoMP, an enhanced navigation framework that integrates offline reinforcement learning and our novel Adaptive Threat-Aware Visibility Estimation (ATAVE) algorithm to enable robots to navigate covertly and efficiently in diverse outdoor settings. ATAVE is a dynamic probabilistic threat modeling technique designed to continuously assess and mitigate potential threats in real time, enhancing the robot's ability to navigate covertly by adapting to evolving environmental and threat conditions. Moreover, our approach generates high-fidelity multi-map representations, including cover maps, potential threat maps, height maps, and goal maps, from LiDAR point clouds, providing a comprehensive understanding of the environment. These multi-maps offer detailed environmental insights that inform strategic navigation decisions. The goal map encodes the relative distance and direction to the target location, guiding the robot's navigation. We train a Conservative Q-Learning (CQL) model on a large-scale dataset collected from real-world environments, learning a robust policy that maximizes cover utilization, minimizes threat exposure, and maintains efficient navigation. We demonstrate our method's capabilities on a physical Jackal robot through extensive experiments across diverse terrains. These experiments show EnCoMP's superior performance compared to state-of-the-art methods, achieving a 95% success rate and 85% cover utilization and reducing threat exposure to 10.5%, while significantly outperforming baselines in navigation efficiency and robustness.
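To make the abstract's pipeline concrete, here is a minimal sketch of a goal map of the kind described above: an egocentric grid encoding relative distance and direction to the target. The grid size, resolution, normalization, and two-channel layout are illustrative assumptions, not the paper's actual encoding.

import numpy as np

def make_goal_map(goal_xy, grid_size=64, resolution=0.25):
    """Egocentric goal map: channel 0 holds the normalized distance from
    each grid cell to the goal, channel 1 the bearing to the goal, both
    expressed in the robot's frame (robot at the grid centre)."""
    half = grid_size * resolution / 2.0
    xs = np.linspace(-half, half, grid_size)       # cell centres, metres
    ys = np.linspace(-half, half, grid_size)
    gx, gy = np.meshgrid(xs, ys)
    dx, dy = goal_xy[0] - gx, goal_xy[1] - gy
    dist = np.hypot(dx, dy)
    dist_norm = dist / dist.max()                  # scale to [0, 1]
    bearing = np.arctan2(dy, dx) / np.pi           # scale to [-1, 1]
    return np.stack([dist_norm, bearing], axis=0)  # shape (2, H, W)

goal_map = make_goal_map(goal_xy=(5.0, 2.0))
print(goal_map.shape)  # (2, 64, 64)

The training side can be sketched in the same spirit. Conservative Q-Learning (Kumar et al., 2020) augments a standard critic loss with a penalty that lowers Q-values on actions the policy proposes and raises them on actions present in the offline dataset; the toy critic and the simplified mean-based penalty below are stand-ins for the paper's actual training setup, not its implementation.

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Toy state-action value network standing in for the CQL critic."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def cql_penalty(q_net, obs, data_act, policy_act, alpha=1.0):
    # Conservative term: push Q down on policy-sampled (potentially
    # out-of-distribution) actions and up on logged dataset actions.
    return alpha * (q_net(obs, policy_act).mean() - q_net(obs, data_act).mean())

q = QNet(obs_dim=8, act_dim=2)
obs = torch.randn(32, 8)
penalty = cql_penalty(q, obs, data_act=torch.randn(32, 2),
                      policy_act=torch.randn(32, 2))

In full CQL this penalty is added to the Bellman error and the out-of-distribution actions are drawn from the current policy (or via a log-sum-exp over actions); only the conservative term is shown here.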
