
Timed Graph Relationformer (TGR) Layer

Updated 27 December 2025
  • Timed Graph Relationformer (TGR) Layer is a neural architecture that processes time-indexed, feature-annotated graphs by integrating local topological context with global set-level and relational information.
  • It combines multi-head graph attention, DeepSets, and Relation Net outputs via a learned gating mechanism and incorporates Time2Vec temporal encoding to produce permutation-invariant representations.
  • The TGR layer has been effectively applied to reinforcement learning scenarios like interactive swarm leader identification, demonstrating superior robustness and generalization over baseline GNN approaches.

The Timed Graph Relationformer (TGR) layer is a neural architecture for processing time-indexed, feature-annotated graphs. It is designed to generate informative, permutation-invariant global representations suitable for reinforcement learning with graph-structured observations. The TGR layer was introduced in the context of interactive Swarm Leader Identification (iSLI), where an agent must probe a robotic swarm to infer its leader, but its construction and data flow highlight a general approach to temporal graph representation learning (Bachoumas et al., 20 Dec 2025).

1. Data Flow and Architectural Modules

At each discrete time step $k$, the TGR layer processes an observation encoded as a directed graph $\hat{\mathcal{G}}[k] = (\hat{H}[k], \hat{S}[k], \hat{R}[k], k)$, where $\hat{H}[k] \in \mathbb{R}^{(N+1) \times D_n}$ is the node feature matrix (for $N$ swarm agents plus the prober), $\hat{S}[k]$ and $\hat{R}[k] \in \{0,1\}^{(N+1)\times(N+1)}$ are adjacency masks, and $k$ is the current timestep.

The TGR layer consists of the following modules, applied with a specific data flow:

  1. Multi-Head Graph Attention Transformer (GAT): Processes node features and adjacency information to produce updated node embeddings that integrate local topological context and edge weighting, outputting $\hat{H}'[k]$.
  2. DeepSets (DS) Readout: Computes a permutation-invariant, set-level summary of node features by aggregating transformed node embeddings.
  3. Relation Net (RN) Readout: Aggregates all pairwise node interactions, incorporating both node features and edge attributes for a relational summary.
  4. Gating Fusion: Combines DS and RN outputs via an element-wise, learned gating mechanism.
  5. Time2Vec (T2V) Temporal Encoding: Encodes the absolute timestep $k$ as a high-dimensional periodic/linear feature.

The outputs of the Gating Fusion and T2V components are concatenated to produce the final TGR global representation $\mathbf{g}_{\mathrm{TGR}}[k]$.

2. Forward Pass and Mathematical Formulation

The TGR layer's forward pass is precisely specified by the following sequence of operations:

  1. Graph Attention Transformer (GAT):

$$\hat{H}'[k] = \mathrm{GAT}(\hat{H}[k],\,\hat{S}[k],\,\hat{R}[k])$$

For each node and attention head, query, key, and value projections are computed; attention coefficients are obtained from a masked softmax over LeakyReLU-activated scores that incorporate edge weights, and the per-head outputs are concatenated.
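
As a concrete illustration, the following minimal PyTorch sketch shows one way such an edge-weighted, multi-head attention step could be realized. The head count, per-head dimension, and the exact way the masks $\hat{S}[k]$ and $\hat{R}[k]$ enter the scores are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the edge-weighted multi-head graph attention step producing H'[k].
# Assumes PyTorch; how the adjacency masks S and R combine into the attention mask is
# an illustrative choice, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeWeightedGraphAttention(nn.Module):
    def __init__(self, d_in: int, n_heads: int = 4, d_head: int = 64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.q = nn.Linear(d_in, n_heads * d_head)
        self.k = nn.Linear(d_in, n_heads * d_head)
        self.v = nn.Linear(d_in, n_heads * d_head)

    def forward(self, H, S, R):
        # H: (N+1, d_in) node features; S, R: (N+1, N+1) binary adjacency masks.
        n = H.size(0)
        q = self.q(H).view(n, self.n_heads, self.d_head)
        k = self.k(H).view(n, self.n_heads, self.d_head)
        v = self.v(H).view(n, self.n_heads, self.d_head)
        # Raw per-head attention scores, shape (heads, N+1, N+1).
        scores = torch.einsum("ihd,jhd->hij", q, k) / self.d_head ** 0.5
        scores = F.leaky_relu(scores, negative_slope=0.2)
        # Edges allowed by either adjacency mask contribute; all others are masked out.
        mask = ((S + R) > 0).unsqueeze(0)
        scores = scores.masked_fill(~mask, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        attn = torch.nan_to_num(attn)  # rows with no neighbours become all-zero
        out = torch.einsum("hij,jhd->ihd", attn, v)
        return out.reshape(n, self.n_heads * self.d_head)  # H'[k]
```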

  2. DeepSets Global Read-Out:

$$\mathbf{g}_{\mathrm{DS}}[k] = \rho\!\left(\sum_{i=1}^{N+1}\phi(\hat{h}'_i[k])\right)$$

where $\phi$ and $\rho$ are MLPs.
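
A minimal sketch of this readout, assuming PyTorch and the 2-layer, 256-unit MLP sizes from the hyperparameter table below (otherwise illustrative):

```python
# Sketch of the DeepSets readout g_DS[k]: sum-pool phi(h'_i) over nodes, then rho.
import torch
import torch.nn as nn

class DeepSetsReadout(nn.Module):
    def __init__(self, d_node: int, d_out: int = 256, d_hidden: int = 256):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(d_node, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        self.rho = nn.Sequential(
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, H_prime):
        # H_prime: (N+1, d_node) updated node embeddings from the GAT.
        pooled = self.phi(H_prime).sum(dim=0)  # permutation-invariant sum over nodes
        return self.rho(pooled)                # g_DS[k]
```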

  3. Relation Net Global Read-Out:

$$\mathbf{g}_{\mathrm{RN}}[k] = \psi\left(\sum_{i=1}^{N+1}\sum_{j=1}^{N+1} \theta\bigl(\hat{h}'_i[k]\,\Vert\,\hat{h}'_j[k]\,\Vert\,e_{i\to j}[k]\bigr)\right)$$

where the edge feature $e_{i\to j}[k]$ captures information such as interaction counts.
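
An analogous sketch of the pairwise readout; the edge-feature width `d_edge` and the MLP sizes are assumptions:

```python
# Sketch of the Relation Net readout g_RN[k]: theta over every ordered pair
# (h'_i || h'_j || e_{i->j}), summed, then psi.
import torch
import torch.nn as nn

class RelationNetReadout(nn.Module):
    def __init__(self, d_node: int, d_edge: int = 1, d_out: int = 256, d_hidden: int = 256):
        super().__init__()
        self.theta = nn.Sequential(
            nn.Linear(2 * d_node + d_edge, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        self.psi = nn.Sequential(
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, H_prime, E):
        # H_prime: (N+1, d_node); E: (N+1, N+1, d_edge) edge features, e.g. interaction counts.
        n = H_prime.size(0)
        h_i = H_prime.unsqueeze(1).expand(n, n, -1)   # sender embedding for every pair
        h_j = H_prime.unsqueeze(0).expand(n, n, -1)   # receiver embedding for every pair
        pairs = torch.cat([h_i, h_j, E], dim=-1)      # (N+1, N+1, 2*d_node + d_edge)
        pooled = self.theta(pairs).sum(dim=(0, 1))    # sum over all ordered pairs
        return self.psi(pooled)                       # g_RN[k]
```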

  4. Learned Gating Fusion:

$$\mathbf{g}_{\mathrm{GR}}[k] = \mathbf{g}_{\mathrm{DS}}[k] \odot \sigma(\mathbf{g}_{\mathrm{RN}}[k])$$

with $\sigma$ the elementwise sigmoid.
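
In code this fusion is a one-liner (assuming both summaries share the dimension $D_g$):

```python
# Sketch of the gating fusion: each DS coordinate is scaled by a sigmoid gate
# driven by the corresponding RN coordinate.
import torch

def gated_fusion(g_ds: torch.Tensor, g_rn: torch.Tensor) -> torch.Tensor:
    return g_ds * torch.sigmoid(g_rn)  # g_GR[k] = g_DS[k] ⊙ σ(g_RN[k])
```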

  5. Time2Vec Temporal Encoding:

$$\tau_k = \bigl[\,w_0 k + b_0 \;\Vert\; \sin(w_1 k + b_1) \;\Vert \dots \Vert\; \sin(w_{D_t-1}k + b_{D_t-1})\,\bigr]^\top$$
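
A minimal Time2Vec sketch with one linear and $D_t - 1$ sinusoidal components, with learnable frequencies $w$ and phases $b$ (the initialization is illustrative):

```python
# Minimal Time2Vec sketch: one linear component plus D_t - 1 sinusoidal components
# applied to the scalar timestep k.
import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    def __init__(self, d_time: int = 64):
        super().__init__()
        self.w = nn.Parameter(torch.randn(d_time))
        self.b = nn.Parameter(torch.zeros(d_time))

    def forward(self, k: torch.Tensor) -> torch.Tensor:
        # k: scalar tensor holding the current timestep.
        z = self.w * k + self.b
        return torch.cat([z[:1], torch.sin(z[1:])])  # [linear || sinusoids], size D_t
```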

  6. Final Output:

$$\mathbf{g}_{\mathrm{TGR}}[k] = \mathbf{g}_{\mathrm{GR}}[k] \;\Vert\; \tau_k$$

yielding a vector in $\mathbb{R}^{D_g+D_t}$.
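
Wiring the sketches above together, one possible end-to-end TGR forward pass looks as follows; the module interfaces are the hypothetical ones defined in the earlier snippets, not the paper's code:

```python
# Illustrative composition of the sketched modules into one TGR forward pass
# producing g_TGR[k] of size D_g + D_t.
import torch

def tgr_forward(gat, ds, rn, t2v, H, S, R, E, k):
    H_prime = gat(H, S, R)                            # node embeddings with local context
    g_gr = gated_fusion(ds(H_prime), rn(H_prime, E))  # fused set-level / relational summary
    tau = t2v(torch.tensor(float(k)))                 # temporal encoding of timestep k
    return torch.cat([g_gr, tau])                     # g_TGR[k] in R^{D_g + D_t}
```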

3. Gating Mechanism for Relational Fusion

The distinctive aspect of the TGR architecture is its gating fusion, which allows dynamic modulation between coarse set-level information (DS) and fine relational cues (RN) at each timestep. Each coordinate $i$ of the DS output is multiplied by a learned sigmoid gate $g_i[k] = \sigma(\mathbf{g}_{\mathrm{RN},i}[k])$. This enables the RN to selectively amplify or suppress set-based features in response to relational context, such as the concentration of prober-swarm interactions. The gating mechanism is critical for integrating aggregate and relational information adaptively as the probing policy interacts with the swarm (Bachoumas et al., 20 Dec 2025).

4. Integration with Downstream Sequence Modeling and PPO

The output sequence $\{\mathbf{g}_{\mathrm{TGR}}[0], \ldots, \mathbf{g}_{\mathrm{TGR}}[k]\}$ is linearly projected and provided as the input token sequence to an S5 encoder, a structured state-space model. The S5 applies layer normalization, structured state-space updates, and residual connections internally. Its recurrent hidden state $h_k$ summarizes past TGR-derived tokens. Two MLP heads, a policy (actor) head and a value (critic) head, map the S5 encoding $y_k$ to a categorical policy over base velocities and a value estimate.

Gradients from the PPO objective (policy loss, value loss, and entropy bonus) flow through the actor and critic heads, the S5 encoder, and into the TGR layer. All components, including the GAT, DS, RN, and T2V modules, are trained end-to-end to maximize the expected clipped surrogate advantage (Bachoumas et al., 20 Dec 2025).
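
The sketch below shows only the downstream wiring; the S5 state-space encoder is not reproduced here, and a GRU stands in purely as a placeholder sequence model. The action count `n_actions` is hypothetical.

```python
# Sketch of the downstream actor-critic wiring over TGR tokens. The paper uses an S5
# state-space encoder; a GRU is used here only as a placeholder for the sequence model.
import torch
import torch.nn as nn

class ActorCriticOverTGR(nn.Module):
    def __init__(self, d_tgr: int = 320, d_model: int = 256, n_actions: int = 5):
        super().__init__()
        self.proj = nn.Linear(d_tgr, d_model)
        self.seq = nn.GRU(d_model, d_model, batch_first=True)  # placeholder for S5
        self.actor = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                   nn.Linear(d_model, n_actions))
        self.critic = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                    nn.Linear(d_model, 1))

    def forward(self, tgr_tokens):
        # tgr_tokens: (batch, k+1, d_tgr) sequence of g_TGR vectors up to step k.
        y, _ = self.seq(self.proj(tgr_tokens))
        y_k = y[:, -1]  # encoding of the latest step
        return torch.distributions.Categorical(logits=self.actor(y_k)), self.critic(y_k)
```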

5. Implementation Details and Hyperparameters

The TGR layer's implementation was found to be robust across a range of graph sizes and swarm speeds. The following hyperparameter settings were used to reproduce results:

| Module | Specification | Key Parameters |
| --- | --- | --- |
| GAT | Multi-head, edge-weighted | $h=4$ heads, $d=64$ per head |
| DS (MLPs) | Coarse aggregation | 2 hidden layers, 256 units each |
| RN (MLPs) | Pairwise relational reasoning | 2 hidden layers, 256 units each |
| T2V | Temporal encoding | $D_t=64$ (1 linear + 63 sinusoidal) |
| Output dim | Global, permutation-invariant | $D_g=256$, $\mathbf{g}_{\mathrm{TGR}}\in\mathbb{R}^{320}$ |
| S5 encoder | State-space sequence model | 4 layers, 256 hidden units |
| PPO | RL optimization | clip $=0.2$, entropy coef. $=0.01$, lr $=3\times 10^{-4}$, batch $=64$, GAE $\lambda=0.95$, $\gamma=0.99$ |

Node- and edge-level features are supplied as raw input. Unless otherwise specified, MLPs use Xavier initialization and LeakyReLU activations (negative slope 0.2). The simulation runs at 20 Hz; on-robot control runs at 5 Hz.
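
For reference, the settings above can be collected into a single configuration dictionary; the key names are illustrative, not taken from the paper:

```python
# The hyperparameter table, collected into one (illustrative) configuration dictionary.
TGR_CONFIG = {
    "gat": {"heads": 4, "d_head": 64},
    "deepsets_mlp": {"hidden_layers": 2, "hidden_units": 256},
    "relation_net_mlp": {"hidden_layers": 2, "hidden_units": 256},
    "time2vec": {"d_time": 64},              # 1 linear + 63 sinusoidal components
    "global_dim": {"d_g": 256, "d_tgr": 320},
    "s5": {"layers": 4, "hidden_units": 256},
    "ppo": {"clip": 0.2, "entropy_coef": 0.01, "lr": 3e-4,
            "batch_size": 64, "gae_lambda": 0.95, "gamma": 0.99},
    "mlp_init": {"scheme": "xavier", "activation": "leaky_relu", "slope": 0.2},
    "control_rate_hz": {"simulation": 20, "on_robot": 5},
}
```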

6. Application to Swarm Leader Identification

The TGR layer serves as the core graph representation mechanism in the iSLI problem, enabling the learning of adversarial probing policies for leader detection under partially observable and dynamic conditions. It outperforms baseline GNN approaches by fusing topological, interactional, and temporal structure, generalizing across swarm sizes and dynamics, and supporting robust sim-to-real transfer. The architecture is particularly well-suited for reinforcement learning settings where relational and set-aggregate information must be adaptively balanced to support sequential decision making (Bachoumas et al., 20 Dec 2025).
