Learning to flock through reinforcement (1911.01697v1)

Published 5 Nov 2019 in physics.soc-ph, cond-mat.stat-mech, and cs.MA

Abstract: Flocks of birds, schools of fish, and insect swarms are examples of the coordinated motion of a group that arises spontaneously from the actions of many individuals. Here, we study flocking behavior from the viewpoint of multi-agent reinforcement learning. In this setting, a learning agent tries to keep contact with the group using the velocities of its neighbors as sensory input. Each learning individual pursues this goal by exerting limited control over its own direction of motion. By means of standard reinforcement learning algorithms we show that: i) a learning agent exposed to a group of teachers, i.e. hard-wired flocking agents, learns to follow them, and ii) in the absence of teachers, a group of independently learning agents evolves towards a state where each agent knows how to flock. In both scenarios, i) and ii), the emergent policy (or navigation strategy) corresponds to the polar velocity alignment mechanism of the well-known Vicsek model. These results a) suggest that such velocity alignment may have naturally evolved as an adaptive behavior that minimizes the rate of neighbor loss, and b) prove that this alignment not only favors (local) polar order but also corresponds to the best policy/strategy for keeping group cohesion when the sensory input is limited to the velocities of neighboring agents. In short: to stay together, steer together.
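The abstract describes a concrete setup: each agent observes the velocities of its neighbors, applies limited steering control, and learns with a standard reinforcement learning algorithm, with the emergent policy converging to Vicsek-style polar alignment. Below is a minimal Python sketch of that framing using tabular Q-learning. The state discretization, turn angles, neighbor-loss reward shaping, and learning constants are all illustrative assumptions, not the paper's reported choices.

```python
import numpy as np

# Assumed discretization and control limits (not taken from the paper).
N_STATES, N_ACTIONS = 8, 3            # heading-difference bins; turn left/none/right
TURNS = np.array([-0.2, 0.0, 0.2])    # limited control over direction (radians)

def observe(theta_i, theta_neighbors):
    """Sensory input: mean heading of neighbors relative to the agent's own
    heading, discretized into N_STATES angular bins."""
    mean = np.arctan2(np.sin(theta_neighbors).mean(),
                      np.cos(theta_neighbors).mean())
    diff = (mean - theta_i + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return int((diff + np.pi) * N_STATES / (2 * np.pi)) % N_STATES

def reward(n_neighbors_before, n_neighbors_after):
    """Proxy for 'minimizing the rate of neighbor loss': penalize each
    neighbor lost during the step (assumed reward shaping)."""
    return -max(0, n_neighbors_before - n_neighbors_after)

def act(Q, s, eps, rng):
    """Epsilon-greedy selection over the three steering actions."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Standard tabular one-step Q-learning update."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# One Q-table per independently learning agent.
Q = np.zeros((N_STATES, N_ACTIONS))
```

Under this encoding, the greedy policy "turn toward the mean neighbor heading" is exactly the polar velocity alignment rule of the Vicsek model, which matches the abstract's claim about the emergent navigation strategy.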

Authors (3)
  1. Mihir Durve (20 papers)
  2. Fernando Peruani (42 papers)
  3. Antonio Celani (27 papers)
Citations (26)
