
DRIFT: Deep Reinforcement Learning for Intelligent Floating Platforms Trajectories (2310.04266v2)

Published 6 Oct 2023 in cs.RO and cs.AI

Abstract: This work introduces a deep reinforcement learning (DRL) suite for controlling floating platforms in both simulated and real-world environments. Floating platforms serve as versatile test-beds that emulate micro-gravity conditions on Earth, making them useful for testing autonomous navigation systems intended for space applications. Our approach addresses system and environmental uncertainties by training policies capable of precise maneuvers under dynamic and unpredictable conditions. Leveraging DRL techniques, our suite achieves robustness, adaptability, and good transferability from simulation to reality. The framework offers fast training times, large-scale testing capabilities, rich visualization options, and ROS bindings for integration with real-world robotic systems. Being open access, the suite serves as a comprehensive platform for practitioners who want to replicate similar research in their own simulated environments and labs.
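To make the control setting concrete, the kind of system the abstract describes (an air-bearing platform maneuvering with on/off thrusters) can be sketched as a minimal gym-style environment. Everything below — the 8-thruster layout, the dynamics constants, and the reward shaping — is a hypothetical illustration for orientation, not the authors' simulator, reward design, or trained policy:

```python
import numpy as np

class FloatingPlatformEnv:
    """Hypothetical 2D floating-platform environment: a point mass with
    8 binary thrusters, as is common on air-bearing test-beds.
    State: [x, y, vx, vy]; goal: drive the platform to the origin."""

    def __init__(self, dt=0.1, mass=5.0, thrust=1.0, seed=0):
        self.dt, self.mass, self.thrust = dt, mass, thrust
        self.rng = np.random.default_rng(seed)
        # Thruster force directions spaced evenly around the body (assumption).
        angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
        self.dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
        self.reset()

    def reset(self):
        self.pos = self.rng.uniform(-1.0, 1.0, size=2)
        self.vel = np.zeros(2)
        return np.concatenate([self.pos, self.vel])

    def step(self, action):
        # action: length-8 binary vector, one on/off flag per thruster.
        action = np.asarray(action, dtype=float)
        force = (action[:, None] * self.dirs).sum(axis=0) * self.thrust
        self.vel += force / self.mass * self.dt
        self.pos += self.vel * self.dt
        dist = float(np.linalg.norm(self.pos))
        reward = -dist - 0.01 * action.sum()  # distance cost + fuel penalty
        done = dist < 0.05
        return np.concatenate([self.pos, self.vel]), reward, done

env = FloatingPlatformEnv()
obs = env.reset()
total = 0.0
for _ in range(50):
    # Hand-coded heuristic in place of a learned policy: fire the thruster
    # best aligned with the direction that cancels position and velocity.
    desired = -(obs[:2] + obs[2:])
    action = np.zeros(8)
    action[int(np.argmax(env.dirs @ desired))] = 1.0
    obs, r, done = env.step(action)
    total += r
    if done:
        break
```

In a DRL suite like the one described, a trained policy network would replace the hand-coded heuristic, and the environment would be vectorized for large-scale parallel training.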
