
Sharing Lifelong Reinforcement Learning Knowledge via Modulating Masks (2305.10997v1)

Published 18 May 2023 in cs.LG, cs.AI, cs.DC, and cs.MA

Abstract: Lifelong learning agents aim to learn multiple tasks sequentially over a lifetime. This requires the ability to exploit previous knowledge when learning new tasks and to avoid forgetting. Modulating masks, a specific type of parameter isolation approach, have recently shown promise in both supervised and reinforcement learning. While lifelong learning algorithms have been investigated mainly in single-agent settings, an open question is how multiple agents can share lifelong learning knowledge with each other. We show that the parameter isolation mechanism used by modulating masks is particularly suitable for exchanging knowledge among agents in a distributed and decentralized system of lifelong learners. The key idea is that isolating specific task knowledge in specific masks allows agents to transfer only that knowledge on demand, resulting in robust and effective distributed lifelong learning. We assume fully distributed and asynchronous scenarios with dynamic agent numbers and connectivity. An on-demand communication protocol ensures agents query their peers for specific masks to be transferred and integrated into their policies when facing each task. Experiments indicate that on-demand mask communication is an effective way to implement distributed lifelong reinforcement learning and provides a lifelong learning benefit over distributed RL baselines such as DD-PPO, IMPALA, and PPO+EWC. The system is particularly robust to connection drops and demonstrates rapid learning due to knowledge exchange.
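The core mechanism described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the `MaskedLayer` class, the single shared weight matrix, and the `share_mask`/`receive_mask` methods are illustrative assumptions. The point is that a frozen backbone is modulated by a small per-task mask, so only the mask needs to be trained, stored, and sent to a peer on demand:

```python
import numpy as np

# Hypothetical sketch: a frozen backbone shared by all tasks, modulated
# by a per-task mask that isolates that task's knowledge.
rng = np.random.default_rng(0)
backbone = rng.standard_normal((4, 4))  # fixed, task-agnostic parameters

class MaskedLayer:
    def __init__(self, backbone):
        self.backbone = backbone  # never updated after initialization
        self.masks = {}           # task_id -> mask (the only trained part)

    def add_task(self, task_id):
        # A real-valued mask; training would update only this array,
        # leaving other tasks' masks (and the backbone) untouched.
        self.masks[task_id] = np.ones_like(self.backbone)

    def forward(self, task_id, x):
        # Effective weights = backbone elementwise-modulated by the mask.
        return x @ (self.backbone * self.masks[task_id])

    def share_mask(self, task_id):
        # On-demand transfer: only the compact mask leaves the agent.
        return self.masks[task_id].copy()

    def receive_mask(self, task_id, mask):
        # Integrating a peer's mask cannot interfere with other tasks.
        self.masks[task_id] = mask.copy()

# Agent A learns task "nav"; agent B queries A for that mask on demand.
agent_a, agent_b = MaskedLayer(backbone), MaskedLayer(backbone)
agent_a.add_task("nav")
agent_a.masks["nav"] *= 0.5  # stand-in for training updates
agent_b.receive_mask("nav", agent_a.share_mask("nav"))
x = np.ones((1, 4))
# Both agents now compute identical policies for "nav".
assert np.allclose(agent_a.forward("nav", x), agent_b.forward("nav", x))
```

Because each task lives entirely in its own mask, a dropped connection or a missing peer only delays acquiring one task's mask; it cannot corrupt knowledge the agent already holds, which matches the robustness the abstract reports.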

Authors (8)
  1. Saptarshi Nath (4 papers)
  2. Christos Peridis (4 papers)
  3. Eseoghene Ben-Iwhiwhu (6 papers)
  4. Xinran Liu (32 papers)
  5. Shirin Dora (8 papers)
  6. Cong Liu (169 papers)
  7. Soheil Kolouri (71 papers)
  8. Andrea Soltoggio (20 papers)
Citations (6)

GitHub