MrCoM: Meta-Regularized Contextual World-Model
- The paper presents MrCoM, a model-based reinforcement learning framework that decomposes latent states and applies meta-regularization to achieve robust cross-scenario generalization.
- MrCoM employs a modularized architecture with a shallow Transformer for contextual encoding and a three-part latent-state decomposition handling stochastic, deterministic, and auxiliary elements.
- Empirical evaluations show MrCoM outperforms baselines in handling dynamics, reward, and observation perturbations, backed by theoretical error bounds on generalization.
The Meta-Regularized Contextual World-Model (MrCoM) is a model-based reinforcement learning (MBRL) framework that addresses generalization in multi-scenario settings by building a unified, meta-regularized world model. MrCoM isolates latent representations aligned with dynamic characteristics and scenario relevance, regularizes both state and value representations via meta-objectives, and provides theoretical guarantees on the generalization gap. Empirical evaluations demonstrate that MrCoM attains superior generalization and robustness compared to contemporary world-model baselines under diverse alterations in environmental dynamics, rewards, and observations (Xiong et al., 9 Nov 2025).
1. Architecture and Components
MrCoM introduces a modularized architecture structured around scenario-agnostic and scenario-specific elements to facilitate cross-scenario transfer. The core elements are:
- Contextual Encoder: At each time step, a context window of recent observations and actions is processed by a shallow Transformer (1–2 layers, 3 heads) to extract contextual embeddings. This choice enables scenario-conditional inference and prediction.
- Latent-State Decomposition: The unified latent state is factorized into three components:
  - A stochastic component that encodes aleatoric uncertainty, governed by a Gaussian prior and a learned posterior.
  - A deterministic component, a recurrent hidden state evolved by a learned transition function.
  - An auxiliary component that captures residual structure, with its own prior and posterior distributions.
  A probabilistic decoder reconstructs the original observations, tying the latent components to observed data. All modules map diverse input scenarios to a shared latent space.
- Policy and Value Heads: The learning framework integrates (a) scenario-specific value heads, (b) a shared meta-value head, and (c) a policy, all operating on the unified latent embedding.
This design partitions scenario-relevant structure within the stochastic and auxiliary components, captures temporal dependencies in the deterministic component, and ensures that policies and value estimates generalize across scenarios through a shared representation (a minimal sketch of this architecture follows).
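The following is a minimal PyTorch sketch of how such a contextual encoder and three-part latent state could be wired together. All module names, dimensions, pooling, and recurrence choices are illustrative assumptions rather than the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ContextualEncoder(nn.Module):
    """Shallow Transformer over a window of (observation, action) pairs."""
    def __init__(self, obs_dim, act_dim, embed_dim=128, n_layers=2, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(obs_dim + act_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, obs_window, act_window):
        # obs_window: (B, K, obs_dim), act_window: (B, K, act_dim)
        tokens = self.proj(torch.cat([obs_window, act_window], dim=-1))
        return self.encoder(tokens).mean(dim=1)  # pooled context embedding (B, embed_dim)

class LatentState(nn.Module):
    """Stochastic + deterministic + auxiliary latent components."""
    def __init__(self, embed_dim=128, latent_dim=128):
        super().__init__()
        self.post_stoch = nn.Linear(embed_dim + latent_dim, 2 * latent_dim)  # Gaussian posterior params
        self.gru = nn.GRUCell(latent_dim, latent_dim)                        # deterministic recurrence
        self.post_aux = nn.Linear(embed_dim, 2 * latent_dim)                 # auxiliary component params

    def forward(self, context, prev_det):
        # Stochastic component: sample from a context-conditioned Gaussian posterior.
        mu_s, logstd_s = self.post_stoch(torch.cat([context, prev_det], dim=-1)).chunk(2, dim=-1)
        z_stoch = mu_s + torch.randn_like(mu_s) * logstd_s.exp()
        # Deterministic component: recurrent update of the hidden state.
        z_det = self.gru(z_stoch, prev_det)
        # Auxiliary component: residual structure, with its own Gaussian.
        mu_a, logstd_a = self.post_aux(context).chunk(2, dim=-1)
        z_aux = mu_a + torch.randn_like(mu_a) * logstd_a.exp()
        return torch.cat([z_stoch, z_det, z_aux], dim=-1)  # unified latent embedding
```

In practice, the policy and value heads would consume the concatenated latent embedding returned by `LatentState.forward`.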
2. Meta-State Regularization
Meta-state regularization enforces that the latent state encodes only the information in the observation that is relevant given the scenario context. To achieve this, MrCoM penalizes the conditional mutual information between the latent state and the observation given the context, effectively discouraging the encoding of scenario-irrelevant noise.
Because this mutual-information term is intractable, it is replaced by a variational upper bound [Poole et al. 2019], which enters the training objective as the meta-state loss.
This procedure strips the latent state of observation features that cannot be predicted from context and action, yielding latent representations that are robust to irrelevant observation noise and scenario-specific peculiarities.
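In practice, such a penalty is commonly implemented as a KL divergence between the observation-conditioned posterior and a context-only prior, which upper-bounds the conditional mutual information in the spirit of the variational bounds of [Poole et al. 2019]. The sketch below is a hedged illustration with placeholder distribution names, not MrCoM's exact objective.

```python
import torch
import torch.distributions as td

def meta_state_loss(posterior: td.Normal, context_prior: td.Normal) -> torch.Tensor:
    """KL(q(z | observation, context, action) || p(z | context, action)), batch-averaged.

    Minimizing this term discourages the latent state from retaining observation
    features that cannot be predicted from context and action alone.
    """
    return td.kl_divergence(posterior, context_prior).sum(dim=-1).mean()

# Usage sketch (placeholder parameter tensors):
# q = td.Normal(post_mu, post_std)    # posterior built from observation + context
# p = td.Normal(prior_mu, prior_std)  # prior built from context and action only
# loss_ms = meta_state_loss(q, p)
```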
3. Meta-Value Regularization
Meta-value regularization aligns policy learning and world-model optimization across diverse objectives. It incorporates three loss terms (sketched in code after this list):
- Scenario-Specific Bellman Update: a temporal-difference loss on each scenario's value head, enforcing value consistency within every scenario.
- Meta-Value Alignment: a loss encouraging all scenario-specific value functions to align with a unified meta-value function.
- Meta-Value Rollout Consistency: a loss enforcing consistency of the meta-value function along model-simulated rollouts.
This tripartite value regularization ensures effective Bellman propagation in all scenarios and constrains the learned world-model to support meta-policy learning.
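A hedged sketch of these three terms as simple mean-squared-error losses follows; target construction (e.g., λ-returns along imagined rollouts) and loss weighting are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bellman_loss(v_scenario: torch.Tensor, v_next_target: torch.Tensor,
                 reward: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    # Scenario-specific TD loss on real transitions (bootstrap target detached).
    return F.mse_loss(v_scenario, (reward + gamma * v_next_target).detach())

def meta_alignment_loss(v_scenario: torch.Tensor, v_meta: torch.Tensor) -> torch.Tensor:
    # Pull the shared meta-value head toward each scenario-specific value estimate.
    return F.mse_loss(v_meta, v_scenario.detach())

def rollout_consistency_loss(v_meta_rollout: torch.Tensor,
                             rollout_returns: torch.Tensor) -> torch.Tensor:
    # Keep the meta-value consistent with returns computed along model-simulated rollouts.
    return F.mse_loss(v_meta_rollout, rollout_returns.detach())
```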
4. Generalization Error Bound
The theoretical framework established in MrCoM provides upper bounds on the generalization error in multi-scenario settings, under assumptions of dynamics homogeneity across scenarios and bounded encoder approximation error.
- Lemma 1 bounds the dynamics representation error.
- Lemma 2 bounds the policy representation error.
- Lemma 3 bounds the resulting performance gap.
- Theorem 2 combines these results into an overall bound on the generalization error.
The bound decomposes the total generalization error into contributions from dynamics modeling error, encoder approximation error, and policy mismatch. MrCoM's regularization objectives map directly onto these error sources.
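The exact expressions are not reproduced here; schematically, a bound of this type has the additive form

$$
\big| J_{\text{true}}(\pi) - J_{\text{model}}(\pi) \big| \;\le\; C_{\text{dyn}}\,\epsilon_{\text{dyn}} \;+\; C_{\text{enc}}\,\epsilon_{\text{enc}} \;+\; C_{\pi}\,\epsilon_{\pi},
$$

where the three error terms correspond to the dynamics modeling, encoder approximation, and policy mismatch contributions named above, and the constants typically depend on the effective horizon (or discount factor) and reward scale. The symbols here are schematic placeholders rather than the paper's notation.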
5. Training Algorithms and Procedural Details
The MrCoM training process consists of two stages:
- World-Model Training:
  1. Sample a scenario and collect transitions using the current policy.
  2. Update the scenario-specific value head with the Bellman loss, update the policy via standard actor-critic updates on both real and model-simulated rollouts, and store the collected pairs for meta-value alignment.
  3. Update the shared meta-value head using the meta-value alignment loss.
  4. Optimize the overall world-model loss, which aggregates the reconstruction and meta-regularization terms (see the sketch below).
Key hyperparameters include the context window length, a batch size of 32, separate actor and critic learning rates, the imagination rollout horizon, and latent-state sizes of 128 per component.
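As a concrete illustration of the final training step above, the overall objective can be assembled as a weighted sum of the reconstruction and meta-regularization terms. The weights `lambda_ms` and `lambda_mv` below are hypothetical hyperparameters, not values reported in the paper.

```python
import torch

def world_model_objective(recon_loss: torch.Tensor,
                          meta_state_kl: torch.Tensor,
                          bellman: torch.Tensor,
                          alignment: torch.Tensor,
                          rollout_consistency: torch.Tensor,
                          lambda_ms: float = 1.0,
                          lambda_mv: float = 1.0) -> torch.Tensor:
    """Total world-model loss: reconstruction + weighted meta-state and meta-value terms."""
    meta_value_term = bellman + alignment + rollout_consistency
    return recon_loss + lambda_ms * meta_state_kl + lambda_mv * meta_value_term
```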
- Scenario Adaptation:
- For a new scenario, the world-model is frozen; a policy and value head are learned from mixed real and model-simulated rollouts, with optional fine-tuning of the world-model on the new scenario's data.
6. Empirical Evaluation and Results
Experiments are conducted on the MuJoCo-based DeepMind Control Suite (Hopper, Walker, Cheetah) with controlled scenario variations:
- Dynamics changes: Uniform random perturbation of limb size and length, sampled within scenario-specific ranges.
- Reward changes: Randomization of the target speed across scenarios.
Training is performed in a multi-scenario manner by merging trajectories from all environments and fitting a unified world-model. Baselines considered include DreamerV3, CaDM, and MAMBA.
- In-distribution and out-of-distribution generalization are evaluated by training and testing under disjoint perturbation settings (e.g., training on one range of perturbation magnitudes and testing on a disjoint range).
- Performance Comparison:
- MrCoM outperforms all baselines in 11/12 multi-scenario in-distribution runs and 11/12 out-of-distribution runs (see Table 1 in (Xiong et al., 9 Nov 2025)).
- Under pure dynamics shifts, MrCoM achieves the highest return in 5/6 cases.
- For observation corruptions (Gaussian noise, dimension addition, random masking), MrCoM attains top performance in 8/12 scenarios.
Ablation studies indicate that removing any latent component, the context prompt, the meta-state loss, or the meta-value loss degrades performance, with the context prompt being most crucial in the multi-scenario regime.
7. Context and Significance
MrCoM's unified world-model approach, three-fold latent decomposition, and regularization mechanisms are designed to meet the challenges of scenario transfer in MBRL by structurally decoupling scenario-dependent and scenario-independent information. The explicit theoretical error bounds allow precise attribution of the sources of generalization loss, tightly linking architecture and training procedure to expected empirical performance. The main empirical findings demonstrate that this design increases robustness and transferability under broad changes in transition dynamics and reward functions, as well as under observation corruptions. A plausible implication is that this paradigm could provide a scalable route to robust MBRL in real-world, non-stationary domains where scenario variation is the norm.