
DiPPeST: Diffusion-based Path Planner for Synthesizing Trajectories Applied on Quadruped Robots (2405.19232v1)

Published 29 May 2024 in cs.RO

Abstract: We present DiPPeST, a novel image- and goal-conditioned diffusion-based trajectory generator for quadrupedal robot path planning. DiPPeST is a zero-shot adaptation of our previously introduced diffusion-based 2D global trajectory generator (DiPPeR). The introduced system incorporates a novel strategy for local real-time path refinement that is reactive to camera input, without requiring any further training, image processing, or environment interpretation techniques. DiPPeST achieves a 92% success rate in obstacle avoidance for nominal environments and an average 88% success rate when tested in environments that are up to 3.5 times more complex in pixel variation than DiPPeR. A visual-servoing framework is developed to allow for real-world execution, tested on the quadruped robot, achieving an 80% success rate in different environments and showcasing improved behavior compared to complex state-of-the-art local planners in narrow environments.


Summary

  • The paper introduces DiPPeST, a diffusion-based planner that fuses global trajectories with real-time local refinements from RGB input without retraining.
  • It integrates a DiPPeR-based global planner with an ROI-driven local refinement method for dynamic navigational adjustments.
  • DiPPeST achieves up to 92% obstacle avoidance in simulation and 80% success in real-world tests, highlighting its robust performance.

Insights into DiPPeST: A Diffusion-Based Path Planner for Quadruped Robots

The paper entitled “DiPPeST: Diffusion-based Path Planner for Synthesizing Trajectories Applied on Quadruped Robots” introduces an innovative approach to robotic path planning, particularly suited for quadrupedal robots operating in complex environments. The proposed method, DiPPeST, leverages a diffusion-based model to generate both global and local paths, demonstrating advanced adaptability and performance without requiring retraining when transitioning from simulation to real-world scenarios.

Key Contributions and Methodology

The central contribution of this work is the development of DiPPeST, which builds upon a previously established diffusion-based global path planner named DiPPeR. The novel aspect of DiPPeST is its zero-shot adaptation capability, allowing it to perform local path refinements in real time based on RGB camera input. This is achieved without requiring any additional training or fine-tuning. Importantly, the approach sidesteps traditional environment representation methods, such as semantic or geometric mappings, relying solely on visual input for trajectory planning.
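
To make the mechanism concrete, below is a minimal sketch of a goal- and image-conditioned diffusion sampling loop of the kind the paper describes, assuming a DDPM-style reverse process; the denoiser interface, noise schedule, and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def sample_trajectory(denoiser, image, goal, num_waypoints=64, steps=50):
    """Iteratively denoise a random 2D trajectory conditioned on an RGB frame
    and a goal expressed in image coordinates (DDPM-style reverse process)."""
    traj = torch.randn(1, num_waypoints, 2)       # noisy (x, y) waypoints in pixel space
    betas = torch.linspace(1e-4, 0.02, steps)     # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    for t in reversed(range(steps)):
        # The denoiser predicts the noise component given the current noisy
        # trajectory, the camera frame, the goal, and the diffusion timestep.
        eps = denoiser(traj, image, goal, t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (traj - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(traj) if t > 0 else torch.zeros_like(traj)
        traj = mean + torch.sqrt(betas[t]) * noise
    return traj                                   # denoised waypoints toward the goal
```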

DiPPeST's design incorporates an integrated global and local planning strategy. The global planner utilizes DiPPeR to generate an initial end-to-end trajectory without relying on a pre-mapped environment. The local planner, a new addition to this framework, adapts the trajectory in real-time by evaluating camera frames to navigate around obstacles, incorporating a novel Region of Interest (ROI) processing method that dynamically selects local goals within the current frame.
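
One plausible realization of this ROI-driven local goal selection, assuming the global waypoints can be projected into pixel coordinates of the current frame, is sketched below; the ROI bounds and helper names are hypothetical and not taken from the paper.

```python
import numpy as np

def select_local_goal(global_path_px, frame_shape, roi_fraction=0.6):
    """global_path_px: (N, 2) global waypoints projected into pixel coordinates
    of the current frame, ordered from the robot toward the goal."""
    h, w = frame_shape[:2]
    # Assumed ROI: the upper band of the image, i.e. the region farther ahead
    # of the robot for a forward-facing camera.
    roi_top, roi_bottom = 0, int(roi_fraction * h)
    in_roi = (
        (global_path_px[:, 0] >= 0) & (global_path_px[:, 0] < w) &
        (global_path_px[:, 1] >= roi_top) & (global_path_px[:, 1] < roi_bottom)
    )
    if not in_roi.any():
        return None  # no visible waypoint: fall back to pure global tracking
    # The farthest visible waypoint along the path becomes the local goal.
    return global_path_px[np.where(in_roi)[0][-1]]
```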

To facilitate real-world application, a robust visual-servoing framework is implemented. This component translates the planned 2D image-based trajectories into actionable 3D movements for the quadruped robot. The methodology also maintains a constant inference speed, which is crucial for consistent performance during dynamic path planning.
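
As an illustration of what one step of such a servoing loop could involve, the sketch below back-projects an image-space waypoint onto a flat ground plane and converts the resulting body-frame offset into velocity commands; the camera model, gains, and flat-ground assumption are ours rather than the paper's.

```python
import numpy as np

def waypoint_to_cmd(u, v, fx, fy, cx, cy, cam_height, k_v=0.5, k_w=1.5):
    """Turn one 2D image waypoint (u, v) into linear/angular velocity commands."""
    # Ray through the pixel in the camera frame (z forward, x right, y down).
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Intersect the ray with the ground plane lying cam_height below the camera.
    scale = cam_height / ray[1]
    point_cam = ray * scale                 # 3D waypoint in the camera frame (metres)
    forward, lateral = point_cam[2], point_cam[0]
    # Proportional controller: drive toward the point while turning to face it.
    v_cmd = k_v * forward
    w_cmd = -k_w * np.arctan2(lateral, forward)
    return v_cmd, w_cmd                     # forward velocity, yaw rate
```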

Numerical Results and Performance

The paper reports DiPPeST achieving a 92% success rate in obstacle avoidance within nominal, relatively uncomplicated environments. Performance decreases only slightly, to an average 88% success rate, in environments up to 3.5 times more complex in pixel variation than those used for DiPPeR, demonstrating robustness to increased visual complexity. Furthermore, real-world execution on a quadrupedal robot achieves an 80% success rate across various environments, an improvement over other state-of-the-art local planners, particularly in narrow environments.

The ability to maintain performance despite variations in camera perspective, input image size, and environmental conditions highlights the adaptability of DiPPeST. The framework's capacity to translate its learned behaviors from simulated environments to real-world applications without retraining is an essential characteristic that underscores its practical utility in robotics.

Implications and Future Directions

This research provides significant insights into the potential of diffusion models for robotic path planning, particularly illustrating how they can be adapted for real-time operation in unstructured environments. The ability to function effectively without retraining enhances the practicability of this approach for various real-world applications, from industrial automation to search and rescue operations.

Future developments could improve the generalization capabilities of DiPPeST through more diverse training datasets, and could integrate kinodynamic properties into the planning process to optimize both trajectory feasibility and execution. Furthermore, improving inference times through advancements in the diffusion process could enable more responsive replanning in dynamically changing environments.

In conclusion, DiPPeST represents a noteworthy advancement in the field of robotic path planning. Its innovative approach provides a strong foundation for future research and development aimed at enhancing the autonomy and adaptability of mobile robots operating in complex real-world settings. The paper distinctly contributes to the expanding body of knowledge on diffusion models in robotics, setting the stage for further exploration and optimization in this promising domain.
