Socially Integrated Navigation: A Social Acting Robot with Deep Reinforcement Learning (2403.09793v3)
Abstract: Mobile robots are increasingly deployed in crowded environments and are becoming part of our society. Socially acceptable navigation behavior that accounts for individual humans is an essential requirement for scalable applications and human acceptance. Deep Reinforcement Learning (DRL) approaches have recently been used to learn a robot's navigation policy and to model the complex interactions between robots and humans. We propose to categorize existing DRL-based navigation approaches by the robot's exhibited social behavior, distinguishing between social collision avoidance, which lacks social behavior, and socially aware approaches with explicitly predefined social behavior. In addition, we propose a novel socially integrated navigation approach in which the robot's social behavior is adaptive and emerges from the interaction with humans. The formulation of our approach is derived from a sociological definition stating that social acting is oriented toward the acting of others. The DRL policy is trained in an environment where the other agents interact in a socially integrated manner and reward the robot's behavior individually. Simulation results indicate that the proposed socially integrated navigation approach outperforms a socially aware approach in ego navigation performance while significantly reducing the negative impact on all agents in the environment.
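The abstract's central idea, that each agent in the environment rewards the robot's behavior individually, can be illustrated with a minimal reward-shaping sketch. All function names, weights, and thresholds below are illustrative assumptions, not the authors' actual formulation: the total reward combines a standard ego navigation term with a per-human social feedback term summed over all humans.

```python
import math

# Hypothetical sketch (not the paper's exact reward): ego navigation
# progress plus individual social feedback from each human agent.

def ego_reward(dist_to_goal, prev_dist_to_goal, reached, collided):
    """Goal-progress shaping with terminal bonus/penalty (assumed weights)."""
    if collided:
        return -1.0
    if reached:
        return 1.0
    # Positive when the robot moved closer to its goal this step.
    return 0.1 * (prev_dist_to_goal - dist_to_goal)

def social_feedback(robot_pos, human_pos, comfort_radius=1.0):
    """Each human individually penalizes intrusions into their comfort zone."""
    d = math.dist(robot_pos, human_pos)
    if d >= comfort_radius:
        return 0.0
    # Penalty grows linearly as the robot approaches the human.
    return -0.25 * (comfort_radius - d) / comfort_radius

def total_reward(dist_to_goal, prev_dist_to_goal, reached, collided,
                 robot_pos, human_positions):
    """Ego term plus the sum of per-human social feedback terms."""
    r = ego_reward(dist_to_goal, prev_dist_to_goal, reached, collided)
    r += sum(social_feedback(robot_pos, h) for h in human_positions)
    return r
```

In such a setup the social component is not a fixed, predefined rule: because every human contributes feedback, the behavior the policy converges to depends on how the surrounding agents react, which is the sense in which the social behavior "emerges from the interaction."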
- Daniel Flögel
- Lars Fischer
- Thomas Rudolf
- Tobias Schürmann
- Sören Hohmann