Solving Multi-Entity Robotic Problems Using Permutation Invariant Neural Networks (2402.18345v1)

Published 28 Feb 2024 in cs.RO

Abstract: Challenges in real-world robotic applications often stem from managing multiple, dynamically varying entities such as neighboring robots, manipulable objects, and navigation goals. Existing multi-agent control strategies face scalability limitations, struggling to handle arbitrary numbers of entities. Additionally, they often rely on engineered heuristics for assigning entities among agents. We propose a data-driven approach to address these limitations by introducing a decentralized control system using neural network policies trained in simulation. Leveraging permutation invariant neural network architectures and model-free reinforcement learning, our approach allows control agents to autonomously determine the relative importance of different entities without being biased by ordering or limited by a fixed capacity. We validate our approach through both simulations and real-world experiments involving multiple wheeled-legged quadrupedal robots, demonstrating their collaborative control capabilities. We prove the effectiveness of our architectural choice through experiments with three exemplary multi-entity problems. Our analysis underscores the pivotal role of the end-to-end trained permutation invariant encoders in achieving scalability and improving task performance in multi-object manipulation and multi-goal navigation problems. The adaptability of our policy is further evidenced by its ability to manage varying numbers of entities in a zero-shot manner, showcasing near-optimal autonomous task distribution and collision avoidance behaviors.
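The central architectural idea described in the abstract is a permutation-invariant encoder that summarizes a variable number of entity observations (neighboring robots, objects, goals) into a fixed-size embedding consumed by each agent's decentralized policy. The sketch below illustrates one common way such an encoder can be built: a shared per-entity MLP followed by a symmetric max-pooling step, in the spirit of PointNet-style set encoders. This is an illustrative assumption rather than the authors' exact architecture; all class names, layer sizes, and the choice of pooling operator are hypothetical.

```python
# Minimal sketch of a permutation-invariant entity encoder feeding a policy head.
# Illustrative only: class names, dimensions, and max-pooling are assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn


class PermutationInvariantEncoder(nn.Module):
    """Maps a variable-size set of entity observations to a fixed-size embedding
    that does not depend on the order of the entities."""

    def __init__(self, entity_dim: int, embed_dim: int = 64):
        super().__init__()
        # Shared MLP applied identically to every entity.
        self.phi = nn.Sequential(
            nn.Linear(entity_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, entities: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # entities: (batch, max_entities, entity_dim)
        # mask:     (batch, max_entities), 1 for real entities, 0 for padding
        features = self.phi(entities)                                   # (B, N, E)
        # Mask padded slots so they never win the max (assumes >= 1 real entity).
        features = features.masked_fill(mask.unsqueeze(-1) == 0, float("-inf"))
        pooled, _ = features.max(dim=1)                                 # symmetric pooling
        return pooled                                                   # (B, E)


class EntityAwarePolicy(nn.Module):
    """Toy decentralized policy: proprioceptive state plus the permutation-invariant
    entity summary produce an action vector."""

    def __init__(self, proprio_dim: int, entity_dim: int, action_dim: int, embed_dim: int = 64):
        super().__init__()
        self.encoder = PermutationInvariantEncoder(entity_dim, embed_dim)
        self.head = nn.Sequential(
            nn.Linear(proprio_dim + embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, proprio, entities, mask):
        z = self.encoder(entities, mask)
        return self.head(torch.cat([proprio, z], dim=-1))


if __name__ == "__main__":
    policy = EntityAwarePolicy(proprio_dim=12, entity_dim=6, action_dim=3)
    proprio = torch.randn(2, 12)
    entities = torch.randn(2, 5, 6)           # up to 5 entities per agent
    mask = torch.tensor([[1, 1, 1, 0, 0],     # agent 0 observes 3 entities
                         [1, 1, 1, 1, 1]])    # agent 1 observes 5
    actions = policy(proprio, entities, mask)

    # Shuffling the entity order leaves the output unchanged.
    perm = torch.randperm(5)
    assert torch.allclose(actions, policy(proprio, entities[:, perm], mask[:, perm]), atol=1e-6)
```

Because the pooling operator is symmetric, reordering the entities (or masking out padded slots) leaves the policy output unchanged, which is what lets a single trained policy handle arbitrary and varying numbers of entities in a zero-shot manner.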
