Embodied-SlotSSM: Scalable Object-Centric SSM
- Embodied-SlotSSM is a scalable object-centric state-space modeling framework that uses slot attention to persistently track objects in complex, partially observable environments.
- It fuses per-slot state-space models with a relational cross-attention encoder to enable temporally aware action prediction and robust memory integration.
- Empirical evaluations on LIBERO benchmarks show significant improvements in subgoal completion and object tracking accuracy in non-Markovian tasks.
Embodied-SlotSSM is a scalable object-centric state-space modeling framework designed for temporally grounded reasoning in complex, partially observable, and non-Markovian embodied settings. It integrates slot attention for persistently tracking object identities, per-slot state-space models (SSMs) for object-level memory, and a relational cross-attention encoder to support temporally aware action prediction conditioned on both visual and language goals (Chung et al., 14 Nov 2025).
1. Motivation and Problem Setting
Modern embodied agents, particularly in robotic manipulation, frequently encounter scenarios where the environment is only partially observable and task-relevant information is not immediately accessible in the current observation. In such non-Markovian settings, the agent's policy must access detailed per-object histories—such as which object has been interacted with and in what sequence—to disambiguate between visually similar scenes requiring different actions. The LIBERO-Mem benchmark, introduced in (Chung et al., 14 Nov 2025), formalizes this challenge with tasks that require long-horizon, object-specific memory and subgoal management, exposing the limitations of conventional vision-language-action (VLA) architectures. Existing VLA models exhibit marked performance degradation on these memory-intensive tasks (subgoal completion typically <5%) due to the inability of dense token-based representations to scale beyond a few hundred frames without intractable memory or attention costs.
2. Core Components of Embodied-SlotSSM
Embodied-SlotSSM fuses three architectural principles for spatio-temporal, object-centric memory:
- Slot Attention-based Perception: At each timestep, a frozen or learned visual encoder extracts dense patch or feature-map tokens from the observation. Slot attention transforms the scene features into object-centric slot embeddings $s_t^{1:K}$, each intended to represent a persistent entity over long timescales.
- Slot-State-Space Modeling (Slot-SSM): For each slot $k$, a dedicated state-space model maintains its own hidden state $h_t^k$, updating via input-conditioned, block-diagonal affine maps:

  $$h_t^k = A(s_t^k)\, h_{t-1}^k + B(s_t^k)\, s_t^k$$

  This mechanism imposes temporal persistence on each slot's memory trace, enabling tracking of both visible and occluded objects, and supports structured recall/regeneration of short-horizon slot trajectories (a minimal code sketch follows this list).
- Relational Encoder for Action Decoding: To exploit structured slot-level memory, a relational encoder applies cross-attention layers where slot-fused codes (which integrate current, predicted-next, and goal embeddings for each slot) act as queries and the current visual feature tokens as keys/values. The resulting relation tokens encode object-scene interactions vital for grounding language- or subgoal-conditioned action selection.
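The per-slot recurrence can be made concrete with a short sketch. Everything below is illustrative: the module name, the diagonal sigmoid-gated parameterization of $A(\cdot)$, and the tensor shapes are assumptions consistent with the description above, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SlotSSM(nn.Module):
    """Per-slot state-space update h_t^k = A(s_t^k) * h_{t-1}^k + B(s_t^k) s_t^k.
    The diagonal, sigmoid-gated form of A is an assumption; the source states
    only that the maps are input-conditioned and block-diagonal."""

    def __init__(self, slot_dim: int = 64, state_dim: int = 128):
        super().__init__()
        # Input-conditioned diagonal transition: one decay gate per state channel.
        self.to_A = nn.Sequential(nn.Linear(slot_dim, state_dim), nn.Sigmoid())
        self.to_B = nn.Linear(slot_dim, state_dim)

    def forward(self, h_prev: torch.Tensor, slots: torch.Tensor) -> torch.Tensor:
        # h_prev: (batch, K, state_dim); slots: (batch, K, slot_dim)
        A = self.to_A(slots)                 # per-slot, per-channel decay in (0, 1)
        return A * h_prev + self.to_B(slots) # elementwise affine update per slot
```

Because each slot's transition depends only on that slot's own embedding, stacking the $K$ per-slot updates yields a block-diagonal joint transition, which is what keeps object memories factorized.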
3. Formal Algorithm and Dataflow
Given a video frame $o_t$ and a global language goal $g$, the per-timestep dataflow is as follows (a code sketch of one step appears after the list):
- Visual Encoding: $f_t = \mathrm{CNN}(o_t)$ via a pretrained CNN
- Slot Attention: slots $s_t^{1:K}$ are initialized (randomly or from $s_{t-1}^{1:K}$), then refined over a fixed number of iterative attention steps
- Slot-SSM Update: $h_t^k = A(s_t^k)\, h_{t-1}^k + B(s_t^k)\, s_t^k$
- Short-term Reconstruction: predict the next-window slots $\hat{s}_{t+1:t+W}^k$ from $h_t^k$ via a shared MLP
- SlotFusion & Relational Encoding: fused codes $z_t^k = \mathrm{SlotFusion}(s_t^k, \hat{s}_{t+1}^k, g)$; the relational encoder computes relation tokens $r_t$ using $z_t^{1:K}$ as queries over $f_t$
- Action Decoding: the VLA head maps $(r_t, z_t^{1:K})$ to action logits
- Loss Computation: cross-entropy for action prediction, temporal contrastive for slot consistency, and mean-squared error for SSM reconstruction
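The following is a hedged sketch of one timestep wiring these stages together. The encoder, slot-attention module, and SlotSSM (from the sketch in Section 2) are assumed to exist with the interfaces shown; the concatenate-and-project SlotFusion, the mean-pooled action readout, and all dimensions are illustrative stand-ins rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class EmbodiedSlotSSMStep(nn.Module):
    """One timestep of the dataflow above (illustrative assumptions throughout)."""

    def __init__(self, encoder, slot_attn, slot_ssm,
                 slot_dim=64, state_dim=128, feat_dim=512, n_actions=256):
        super().__init__()
        self.encoder, self.slot_attn, self.slot_ssm = encoder, slot_attn, slot_ssm
        self.recon_mlp = nn.Sequential(                  # shared short-horizon predictor
            nn.Linear(state_dim, state_dim), nn.ReLU(),
            nn.Linear(state_dim, slot_dim))
        self.fuse = nn.Linear(3 * slot_dim, slot_dim)    # SlotFusion stand-in
        self.rel_attn = nn.MultiheadAttention(           # relational cross-attention
            slot_dim, num_heads=4, kdim=feat_dim, vdim=feat_dim, batch_first=True)
        self.policy = nn.Linear(slot_dim, n_actions)     # VLA action head stand-in

    def forward(self, frame, h_prev, goal_slots):
        feats = self.encoder(frame)                  # (B, N, feat_dim) visual tokens
        slots = self.slot_attn(feats)                # (B, K, slot_dim) object slots
        h = self.slot_ssm(h_prev, slots)             # per-slot SSM memory update
        pred_next = self.recon_mlp(h)                # predicted next-step slots
        fused = self.fuse(torch.cat([slots, pred_next, goal_slots], dim=-1))
        rel, _ = self.rel_attn(fused, feats, feats)  # fused slots query scene tokens
        return self.policy(rel.mean(dim=1)), h, pred_next
```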
Block diagram:
| Stage | Input(s) | Output(s) |
|---|---|---|
| Observation | Frame $o_t$ | CNN features $f_t$ |
| Slot Attention | $f_t$ (optionally prior slots $s_{t-1}^{1:K}$) | Slots $s_t^{1:K}$ |
| Slot-SSM | $s_t^{1:K}$, $h_{t-1}^{1:K}$ | Updated states $h_t^{1:K}$ |
| Relational Enc. | Fused codes $z_t^{1:K}$, features $f_t$ | Relation tokens $r_t$ |
| Action Decoder | $r_t$, $z_t^{1:K}$ | Action logits |
4. Training, Loss Functions, and Hyperparameters
The total training objective is a weighted sum of the following terms (a combined sketch follows the list):
- Slot Temporal Contrastive Loss: Encourages consistent slot identity over small time horizons, crucial for avoiding slot collapse or identity drift.
- Slot-SSM Windowed Reconstruction Loss: Enforces accurate reconstruction of per-slot latent trajectories over a local window.
- Action Cross-Entropy Loss: Standard for supervised policy learning, with input tokens composed of relation and slot-fused codes.
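A minimal sketch of the combined objective, assuming an InfoNCE form for the temporal contrastive term; the loss weights `w_ce`, `w_con`, `w_rec` and the temperature `temp` are hypothetical, and the paper's exact contrastive formulation is not given here.

```python
import torch
import torch.nn.functional as F

def total_loss(action_logits, action_targets,
               slots_t, slots_tp1, pred_window, true_window,
               w_ce=1.0, w_con=0.1, w_rec=1.0, temp=0.1):
    # 1) Action cross-entropy on discretized action targets.
    l_ce = F.cross_entropy(action_logits, action_targets)

    # 2) Temporal contrastive term: slot k at time t should match slot k at
    #    t+1 and repel the other K-1 slots (InfoNCE over slot indices).
    q = F.normalize(slots_t, dim=-1)                      # (B, K, D)
    k = F.normalize(slots_tp1, dim=-1)                    # (B, K, D)
    sim = torch.einsum('bkd,bjd->bkj', q, k) / temp       # (B, K, K)
    labels = torch.arange(q.size(1), device=q.device).expand(q.size(0), -1)
    l_con = F.cross_entropy(sim.flatten(0, 1), labels.flatten())

    # 3) Windowed reconstruction of per-slot latent trajectories (MSE).
    l_rec = F.mse_loss(pred_window, true_window)

    return w_ce * l_ce + w_con * l_con + w_rec * l_rec
```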
Hyperparameters validated in (Chung et al., 14 Nov 2025) include: an AdamW optimizer, a batch size of 16 trajectories, slots of dimension 64, a hidden-state dimension of 128, a short reconstruction window $W$, 3 relational layers with 4 heads, and MLP dropout; a minimal setup sketch follows.
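A setup sketch under explicit assumptions: the learning-rate and weight-decay values below are hypothetical placeholders, not the paper's reported settings; the remaining constants follow the list above.

```python
import torch
import torch.nn as nn

# Stand-in model so the snippet runs on its own; in practice this would be
# the full Embodied-SlotSSM step module sketched in Section 3.
model = nn.Linear(64, 128)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,            # hypothetical placeholder, not the reported value
    weight_decay=1e-2,  # hypothetical placeholder, not the reported value
)

BATCH_TRAJECTORIES = 16       # from the hyperparameter list above
SLOT_DIM, STATE_DIM = 64, 128
REL_LAYERS, REL_HEADS = 3, 4
```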
5. Empirical Evaluation and Ablation
Experiments on LIBERO-Goal (general) and LIBERO-Mem (non-Markovian memory tasks) demonstrate:
- Improvements over Dense/Slot VLA Models: On LIBERO-Goal, Embodied-SlotSSM achieves 80.1% average success (vs. 66.5% for SlotVLA), with consistent gains on Markovian and non-Markovian benchmarks.
- Non-Markovian Task Superiority: Subgoal completion on LIBERO-Mem is 14.8% for Embodied-SlotSSM versus 5.0% for dense baselines and SlotVLA, with per-task completion as high as 50% on the easiest memory-demanding tasks.
- Ablation Findings: Removing the Slot-SSM loss drops LIBERO-Mem performance by nearly half; removing the relational encoder also substantially reduces subgoal completion. Increasing the reconstruction window $W$ beyond its default length yields diminishing returns due to increased optimization demand.
- Qualitative Results: Visualization of slot heatmaps confirms sustained object identity through occlusion and long horizons; reconstructed slot trajectories closely match ground truth motion profiles.
6. Architectural Relations and Comparisons
Embodied-SlotSSM extends the Slot Structured World Model paradigm (Collu et al., 8 Jan 2024) by:
- Substituting a classical slot-GNN with per-slot state-space models (block-diagonal, slot-wise dynamics)
- Integrating a relational cross-attention encoder supporting vision-language-action fusion
- Utilizing structured, multi-frame slot reconstruction to stabilize slot identity
Unlike prior Transformer-based video world models that use a small set of transformer slots but rely exclusively on cross- and self-attention for memory (Petri et al., 30 May 2024), Embodied-SlotSSM enforces explicit slot memory via SSMs and is directly evaluated in embodied, agent-in-the-loop settings (robotic manipulation). The absence of action-conditioning, compositional goal representations, and agent interactivity in earlier works delineates the distinct focus and novelty of Embodied-SlotSSM.
7. Limitations and Prospects
Current instantiations employ oracle subgoal embeddings, limiting autonomous subgoal inference. The approach is demonstrated in simulation and requires domain adaptation for real-world transfer. Scaling to larger numbers of objects or longer horizons is constrained by token and memory bottlenecks, motivating future work in hierarchical SSMs and slot compression. Extensions to continuous control, integrated subgoal inference via LLMs, and broader foundation model integration are identified as next steps (Chung et al., 14 Nov 2025).
8. Related Frameworks and Empirical Context
The Slot Structured World Models (SSWM) of (Collu et al., 8 Jan 2024) combine a pixel-space slot attention encoder with a latent graph neural network (GNN) for object interaction and action-conditioned prediction. SSWM achieves markedly superior long-horizon object prediction compared to C-SWM, as measured by Hits@1 and MRR, by maintaining object-level factorization and using frozen slot attention for dynamics learning.
In contrast, (Petri et al., 30 May 2024) introduces a Transformer world model (FPTT) with token-based VQ encodings and slot-structured cross/self-attention, but without classic slot-attention or agent-action conditioning. FPTT improves sample efficiency and predictive stability but is not evaluated in embodied settings.
Embodied-SlotSSM brings together these architectural branches—persistently object-centric, memory-rich, and relationally compositional modeling—to address long-term, ambiguous, object-conditional reasoning and action prediction in real-world-inspired robotic domains.