Adaptive Leader-Follower Formation Control and Obstacle Avoidance via Deep Reinforcement Learning (1911.06882v1)

Published 15 Nov 2019 in cs.RO, cs.SY, and eess.SY

Abstract: We propose a deep reinforcement learning (DRL) methodology for the tracking, obstacle avoidance, and formation control of nonholonomic robots. By separating vision-based control into a perception module and a controller module, we can train a DRL agent without sophisticated physics or 3D modeling. In addition, the modular framework avoids the daunting retraining of an end-to-end image-to-action neural network and provides flexibility in transferring the controller to different robots. First, we train a convolutional neural network (CNN) to accurately localize in an indoor setting with dynamic foreground/background. Then, we design a new DRL algorithm named Momentum Policy Gradient (MPG) for continuous control tasks and prove its convergence. We also show that MPG is robust at tracking varying leader movements and can naturally be extended to problems of formation control. Leveraging reward shaping, features such as collision and obstacle avoidance can be easily integrated into a DRL controller.
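A minimal sketch of the reward-shaping idea mentioned in the abstract: a leader-tracking term and an obstacle-avoidance penalty combined into a single scalar reward. The function name, weights, and distance thresholds below are illustrative assumptions, not values from the paper, whose exact reward terms are not given here.

```python
import numpy as np

# Hypothetical shaped reward for a follower robot. Every weight and
# threshold here is an illustrative assumption, not taken from the paper.
def shaped_reward(follower_pos, leader_pos, obstacle_positions,
                  desired_gap=1.0, collision_radius=0.3,
                  w_track=1.0, w_obstacle=0.5):
    """Combine a leader-tracking term with an obstacle-avoidance penalty."""
    # Tracking term: penalize deviation from the desired leader-follower gap.
    gap_error = abs(np.linalg.norm(follower_pos - leader_pos) - desired_gap)
    r_track = -w_track * gap_error

    # Obstacle term: large fixed penalty inside the collision radius,
    # soft repulsion that decays with distance outside it.
    r_obstacle = 0.0
    if obstacle_positions:
        d_min = min(np.linalg.norm(follower_pos - o) for o in obstacle_positions)
        r_obstacle = -10.0 if d_min < collision_radius else -w_obstacle / d_min

    return r_track + r_obstacle

# Example: follower 1.5 m from the leader with one nearby obstacle.
r = shaped_reward(np.array([0.0, 0.0]), np.array([1.5, 0.0]),
                  [np.array([0.2, 0.1])])
print(r)  # tracking error plus collision penalty
```

Because each feature enters the reward as an additive term, new behaviors (e.g., inter-robot collision avoidance for formation control) can be added without restructuring the DRL controller itself, which is the flexibility the abstract highlights.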

Authors (8)
  1. Yanlin Zhou (19 papers)
  2. Fan Lu (41 papers)
  3. George Pu (7 papers)
  4. Xiyao Ma (6 papers)
  5. Runhan Sun (3 papers)
  6. Hsi-Yuan Chen (2 papers)
  7. Xiaolin Li (54 papers)
  8. Dapeng Wu (52 papers)
Citations (17)
