Strongly Coordinated Multi-Agent Option Discovery

Develop option discovery techniques for cooperative multi-agent reinforcement learning that produce joint options exhibiting strong inter-agent coordination. Such options should capture inter-agent dependencies and correlations beyond the loosely coupled behaviors obtained by Kronecker-product-based covering options, addressing the unresolved challenge of discovering strongly coordinated behaviors.

Background

In multi-agent reinforcement learning, existing macro-action approaches often rely on local (single-agent) options or construct joint options via Kronecker products of single-agent transition graphs. These methods tend to synchronize independent behaviors and fail to capture strong inter-agent dependencies or correlations, limiting their ability to produce truly coordinated joint behaviors.
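The "loose coupling" of Kronecker-product joint options has a simple spectral explanation: every eigenvector of a Kronecker product A1 ⊗ A2 factorizes as the Kronecker product of per-agent eigenvectors, so a joint option derived from such an eigenvector merely combines two independent single-agent directions. The sketch below illustrates this on a hypothetical two-agent setup (the chain graphs and the `path_adjacency` helper are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of an n-state chain (a minimal per-agent transition graph)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

# Two agents with their own state-transition graphs; the joint graph over
# all 4 * 3 = 12 joint states is built as a Kronecker product.
A1, A2 = path_adjacency(4), path_adjacency(3)
A_joint = np.kron(A1, A2)

# Spectral fact: (A1 ⊗ A2)(u ⊗ v) = (A1 u) ⊗ (A2 v) = (λ μ)(u ⊗ v),
# so every joint eigenvector factorizes into independent per-agent parts.
w1, U1 = np.linalg.eigh(A1)
w2, U2 = np.linalg.eigh(A2)
u, v = U1[:, -1], U2[:, -1]      # top per-agent eigenvectors
joint_vec = np.kron(u, v)        # factorized candidate joint eigenvector
print(np.allclose(A_joint @ joint_vec, (w1[-1] * w2[-1]) * joint_vec))  # → True
```

Because the factorized vector carries no cross-agent correlation, options cut from its level sets cannot encode dependencies between the agents, which is exactly the limitation the open problem targets.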

The paper notes that this limitation leaves the discovery of strongly coordinated behaviors as an open challenge. While the authors propose a method aimed at addressing this gap by leveraging inter-agent relative representations and Laplacian-based option discovery, the broader problem of reliably discovering such strongly coordinated joint options remains a recognized unresolved issue in the literature.

References

"This leaves the problem of discovering strongly coordinated behaviours still open, which is precisely what we aim to address with our method."

Discovering Coordinated Joint Options via Inter-Agent Relative Dynamics  (2512.24827 - Steleac et al., 31 Dec 2025) in Related Work, Section "Temporally extended actions for multi-agent systems"