
Cross-Embodiment Robot Data

Updated 3 February 2026
  • Cross-embodiment robot data are datasets and representations that enable learning and control across robots with varied morphologies and sensorimotor constraints.
  • They employ particle-based and functionally abstracted world-state representations with graph neural networks to model and predict multiparticle dynamics.
  • Integration of simulation, real, and human data supports zero-/few-shot policy transfer, improving generalization in dexterous manipulation tasks.

Cross-embodiment robot data refers to datasets and representations explicitly constructed or transformed to support learning, generalization, and control across robots with diverse morphologies, actuation schemes, kinematic chains, and sensing modalities. By abstracting, aligning, or normalizing state and action spaces, these data pipelines aim to enable policy sharing and generalization—from simulation to real hardware, from one hand model to another, or between human and robot demonstrations—in manipulation and dexterous control. Key challenges addressed include heterogeneity in joint structure, action conventions, geometric scale, and embodiment-specific sensorimotor constraints.

1. Particle-Based and Functionally Abstracted World-State Representations

One approach to cross-embodiment data is to reparameterize both the robot and the environment in an embodiment-invariant state and action space. In "Scaling Cross-Embodiment World Models for Dexterous Manipulation" (He et al., 3 Nov 2025), every robot and human hand (e.g., the 6-DoF Ability Hand, the 24-DoF Shadow Hand, or a human hand) is represented as an unordered set of 3D particles:

$$X_t^{(e)} = \bigl\{x^{(e)}_{i,t} \in \mathbb{R}^3\bigr\}_{i=1}^{N_e}, \quad X_t^{(o)} = \bigl\{x^{(o)}_{j,t} \in \mathbb{R}^3\bigr\}_{j=1}^{N_o}$$

The world-state is $X_t = (X_t^{(e)}, X_t^{(o)}) \in (\mathbb{R}^3)^{N_e+N_o}$.

Actions are defined as per-particle displacements:

$$a_t^P = \Delta X_t^{(e)} = \bigl\{\delta_{i,t} \in \mathbb{R}^3\bigr\\}_{i=1}^{N_e}$$

Given joint-space commands $u_t$, forward kinematics $\Phi_e$ maps $u_t$ to $a_t^P$ by:

$$X_t^{(e)} = \Phi_e(q_t), \quad X_{t+1}^{(e)} = \Phi_e(q_t + u_t), \quad a_t^P = X_{t+1}^{(e)} - X_t^{(e)}$$

This representation abstracts away the specifics of the underlying kinematic chain and creates a unified input for downstream dynamics models and planners.
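The mapping from joint commands to particle-displacement actions can be written in a few lines. In the sketch below, a toy 2-link planar chain stands in for the forward-kinematics map $\Phi_e$; `fk_planar_2link` and its particle sampling are hypothetical simplifications — a real pipeline would sample particles from the robot's link meshes via its full kinematic model.

```python
import numpy as np

def fk_planar_2link(q, link_lengths=(0.3, 0.2), pts_per_link=4):
    """Toy stand-in for Phi_e: joint angles -> an unordered set of 3D
    particles (here, points sampled along a 2-link planar chain, z = 0)."""
    l1, l2 = link_lengths
    base = np.zeros(2)
    elbow = base + l1 * np.array([np.cos(q[0]), np.sin(q[0])])
    tip = elbow + l2 * np.array([np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])
    particles = []
    for a, b in [(base, elbow), (elbow, tip)]:
        for s in np.linspace(0.0, 1.0, pts_per_link):
            p = (1 - s) * a + s * b
            particles.append([p[0], p[1], 0.0])
    return np.asarray(particles)  # shape (N_e, 3)

def particle_action(q_t, u_t, fk=fk_planar_2link):
    """a_t^P = Phi_e(q_t + u_t) - Phi_e(q_t): one 3D displacement per particle."""
    return fk(q_t + u_t) - fk(q_t)

q = np.array([0.1, -0.2])
u = np.array([0.05, 0.02])
a_p = particle_action(q, u)
print(a_p.shape)  # (8, 3)
```

Because the action lives entirely in particle space, the same downstream model consumes it regardless of how many joints produced it.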

Functional abstraction can also be achieved via contact and surface representations, as in CEI (Wu et al., 14 Jan 2026), where the functional similarity between embodiments is quantified using Directional Chamfer Distance between surface point–normal pairs derived from forward kinematics. This enables synthesis of trajectory-aligned datasets for new robot morphologies regardless of hardware-specific DOFs.
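A directional Chamfer distance over point–normal pairs can be sketched roughly as follows. This is a simplified variant (nearest-neighbor point distance plus a normal-misalignment penalty, symmetrized), not the exact formulation used in CEI.

```python
import numpy as np

def directional_chamfer(pa, na, pb, nb):
    """Simplified directional Chamfer distance between two surfaces given
    as (point, unit-normal) pairs: for each point, nearest-neighbor
    distance plus (1 - |cos|) misalignment of the matched normals."""
    def one_side(p1, n1, p2, n2):
        d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)  # (N1, N2)
        idx = d.argmin(axis=1)                     # nearest neighbor in p2
        point_term = d[np.arange(len(p1)), idx]
        normal_term = 1.0 - np.abs(np.sum(n1 * n2[idx], axis=1))
        return np.mean(point_term + normal_term)
    return 0.5 * (one_side(pa, na, pb, nb) + one_side(pb, nb, pa, na))

pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
nrm = np.tile([0.0, 0, 1], (3, 1))
print(directional_chamfer(pts, nrm, pts, nrm))  # 0.0
```

Identical surfaces score zero; mismatched geometry or flipped contact normals both raise the distance, which is what makes the metric useful for ranking candidate morphologies by functional similarity.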

2. Cross-Embodiment Learning Architectures and Objective Functions

Graph Neural Networks (GNNs) operating on these shared state representations have been leveraged to model multiparticle dynamics for world modeling across diverse robot hands. In (He et al., 3 Nov 2025), the DPI-Net–style radius-graph's nodes encode both per-particle geometry and ownership (robot vs. object), while edges connect particles within a fixed radius and encode geometric and relational features. The message-passing update equations are:

$$\epsilon_{k,t}^l = f_E\bigl(c_{k,t}^e, h_{u_k,t}^{l-1}, h_{v_k,t}^{l-1}\bigr), \quad h_{i,t}^l = f_O\Bigl(c_{i,t}^o, \sum_{k:\, v_k = i} \epsilon_{k,t}^l,\; h_{i,t}^{l-1}\Bigr)$$

The node-wise decoder predicts the next-step particle state.
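A minimal sketch of this radius-graph message-passing step, with random weight matrices standing in for the learned MLPs $f_E$ and $f_O$, illustrates the data flow (edge features computed from sender/receiver states, summed into each receiver, then a node update):

```python
import numpy as np

rng = np.random.default_rng(0)

def radius_graph(X, r):
    """Directed edges (u, v) between distinct particles closer than r."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    u, v = np.nonzero((d < r) & ~np.eye(len(X), dtype=bool))
    return u, v

def message_passing(X, owner, r=0.15, layers=2, hidden=16):
    """Toy DPI-Net-style block: edge update f_E over sender/receiver
    features, then node update f_O over summed incoming messages.
    Random weights; this shows the computation pattern only."""
    u, v = radius_graph(X, r)
    h = np.concatenate([X, owner[:, None]], axis=1)           # node features
    W_e = rng.standard_normal((2 * h.shape[1], hidden)) * 0.1
    W_o = rng.standard_normal((h.shape[1] + hidden, h.shape[1])) * 0.1
    for _ in range(layers):
        eps = np.tanh(np.concatenate([h[u], h[v]], axis=1) @ W_e)  # f_E
        agg = np.zeros((len(X), hidden))
        np.add.at(agg, v, eps)                    # sum over k with v_k = i
        h = np.tanh(np.concatenate([h, agg], axis=1) @ W_o)        # f_O
    return h

X = rng.random((20, 3))
owner = np.array([0] * 12 + [1] * 8)   # 12 robot particles, 8 object particles
h = message_passing(X, owner)
print(h.shape)  # (20, 4)
```

Because the graph is rebuilt from particle positions at every step, the same network applies unchanged to any hand, object count, or particle ordering.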

Learning is supervised: the model parameters θ minimize

$$\theta^* = \arg\min_\theta \; \mathbb{E}_{(X_t,\, a_t^P,\, X_{t+1})} \bigl[\mathcal{L}\bigl(\hat f_\theta(X_t, a_t^P), X_{t+1}\bigr)\bigr]$$

Here $\mathcal{L}$ is $\ell_{\mathrm{MSE}}$ for particle-corresponded data (simulation), or a Chamfer / Earth Mover's Distance for unpaired data (e.g., human scans).

Other datasets employ conditional VAEs for bridging human and robot grasp data; CEDex (Wu et al., 29 Sep 2025) trains a CVAE to map object point clouds to human-like contact and part maps, which are then aligned, merged, and optimized using signed-distance field constraints to generate physically valid robot grasps. Downstream policies are trained using behavioral cloning or denoising-diffusion objectives, e.g.:

$$L_{\mathrm{DM}} = \mathbb{E}_{(o,a),\, t,\, \epsilon}\, \bigl\| a - \psi(o,\, a + \sigma_t \epsilon,\, t) \bigr\|_2^2$$
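The denoising objective above can be estimated with a short Monte-Carlo routine: perturb each action with noise at a randomly drawn scale $\sigma_t$ and regress the denoiser's output back to the clean action. The `oracle` denoiser below is a hypothetical stand-in used only to show that the loss vanishes for a perfect predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_loss(psi, obs, actions, sigmas):
    """Monte-Carlo estimate of L_DM: a ~ data, t ~ Uniform, eps ~ N(0, I);
    penalize ||a - psi(o, a + sigma_t * eps, t)||^2."""
    t = rng.integers(0, len(sigmas), size=len(actions))
    eps = rng.standard_normal(actions.shape)
    noisy = actions + sigmas[t][:, None] * eps
    pred = psi(obs, noisy, t)
    return np.mean(np.sum((actions - pred) ** 2, axis=1))

obs = rng.standard_normal((32, 4))      # toy observations
acts = rng.standard_normal((32, 7))     # toy 7-DoF actions
sigmas = np.linspace(0.01, 1.0, 10)     # noise schedule
oracle = lambda o, a_noisy, t: acts     # hypothetical perfect denoiser
print(diffusion_loss(oracle, obs, acts, sigmas))  # 0.0
```

In practice `psi` is a learned network conditioned on observations; minimizing this loss trains it to strip noise from corrupted actions, so iterating the denoiser from pure noise samples new actions.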

3. Dataset Construction: Simulation, Real, and Human Data Aggregation

Cross-embodiment data pipelines combine simulated, real-robot, and human datasets. (He et al., 3 Nov 2025) aggregates six robot hands (ranging from 6 to 24 DoF, in SAPIEN and Rewarped simulators) on rigid and deformable tasks, contributing ~30k transitions per task. Multi-view, markerless human hand motion is captured with POEM-v2 and reconstructed into particle sets, yielding ~10k triplets per skill primitive.

The OXE-AugE project (Ji et al., 15 Dec 2025) employs a scalable augmentation pipeline that "cross-paints" robot videos and poses from one embodiment onto another via segmentation (SAM2), background inpainting (E²FGVI), and MuJoCo-based pose alignment, resulting in a 4.44 million–trajectory, 9-robot dataset balanced in entropy (H≈ln 9) and covering extensive task and robot morphology diversity.
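The entropy-balance criterion is easy to check: the Shannon entropy of the per-embodiment trajectory distribution reaches its maximum, ln 9, exactly when the nine embodiments are equally represented. A minimal sketch:

```python
import numpy as np

def embodiment_entropy(counts):
    """Shannon entropy (nats) of the per-robot trajectory distribution;
    a dataset balanced over 9 embodiments approaches H = ln 9."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p, where=p > 0, out=np.zeros_like(p))))

print(embodiment_entropy([1] * 9))  # ln 9 ≈ 2.197
```

A skewed dataset (e.g., all trajectories from one robot) scores near zero, so the metric directly quantifies how well an augmentation pipeline has equalized the embodiment distribution.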

Table: Representative Datasets Used in Cross-Embodiment Learning

| Dataset / System | Robot Types | Human Data | Modes | Scale |
|---|---|---|---|---|
| (He et al., 3 Nov 2025) | 6 simulated, 2 real hands | POEM-v2 | Manipulation | ~30k/task |
| CEDex (Wu et al., 29 Sep 2025) | Barrett, Robotiq, Allegro, Shadow | Yes | Grasping | 20M grasps |
| OXE-AugE (Ji et al., 15 Dec 2025) | 9 augmented arms + grippers | No | Visuomotor | 4.44M traj |
| CEI (Wu et al., 14 Jan 2026) | 16 target morphologies | No | Manipulation | 400 demos |

Real-robot deployments employ hardware such as Ability Hand and XHand on 7-DoF arms recorded with multi-camera setups (He et al., 3 Nov 2025).

4. Empirical Findings: Scaling, Generalization, and Transfer Performance

Extensive quantitative studies demonstrate several scaling laws and cross-embodiment generalization phenomena. In (He et al., 3 Nov 2025), average particle MSE on held-out test embodiments decreases monotonically as the number of training hands increases from 1 to 6, exhibiting scaling-law behavior: with five source hands, held-out (zero-shot) error matches or beats that of a model trained only on the target hand. Adding simulated and real data sources further reduces error by 5–10%; the effect is especially pronounced for soft-body manipulation (up to a 20% reduction in MSE).

In OXE-AugE (Ji et al., 15 Dec 2025), augmenting policy training with diverse robot data yields superlinear gains: +9–19% in pairwise transfer success and +30–40% in generalization to unseen platforms. Real-world deployments of fine-tuned foundation models (OpenVLA, π₀) improve bridge-task success by 24–45% on unseen arm–gripper combinations, with standard errors below 6%. CEDex (Wu et al., 29 Sep 2025) reports an average grasp success of 88.7% versus 87.5% for the prior state of the art, with grasp diversity improving from 0.450 to 0.512 radians, and attributes a +3.5% absolute improvement in state-of-the-art learning-based grasp networks to the new cross-embodiment data.

Zero-shot and few-shot transfer become feasible: in (He et al., 3 Nov 2025), model-predictive planning using the GNN world model on real hardware achieves a Chamfer error of $6.9 \times 10^{-3} \pm 0.1$ (co-trained) without any per-hand fine-tuning.

5. Integration with Model-Based and Policy Planning

To close the learning–execution loop, cross-embodiment world models are integrated with model-predictive planners. (He et al., 3 Nov 2025) utilizes the Cross-Entropy Method (CEM) for action sampling over low-dimensional primitive spaces, mapping action sequences to particle-displacement sequences via forward kinematics, rolling out the learned GNN, and evaluating terminal costs as the Chamfer distance between object particle clouds and task goals. CEM is run for 5 iterations and 512 samples/iteration, selecting the top candidate without additional fine-tuning for new hardware.
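The planning loop can be sketched as a generic CEM routine: sample candidates from a Gaussian, keep the lowest-cost elites, refit the distribution, and repeat. The quadratic cost below is a hypothetical stand-in for the Chamfer terminal cost that would be computed from GNN rollouts.

```python
import numpy as np

rng = np.random.default_rng(0)

def cem_plan(cost_fn, dim, iters=5, samples=512, elite_frac=0.1):
    """Cross-Entropy Method over a low-dimensional primitive space:
    sample, score, refit a Gaussian to the elites, return the mean."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(elite_frac * samples))
    for _ in range(iters):
        cand = mu + sigma * rng.standard_normal((samples, dim))
        costs = np.array([cost_fn(c) for c in cand])
        elite = cand[np.argsort(costs)[:n_elite]]      # lowest-cost samples
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Hypothetical terminal cost in place of a Chamfer distance to the goal:
goal = np.array([0.4, -0.2, 0.7])
best = cem_plan(lambda a: float(np.sum((a - goal) ** 2)), dim=3)
print(np.round(best, 2))
```

Because evaluation needs only forward rollouts of the world model, swapping in a new hand changes the kinematics mapping but leaves this optimizer untouched.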

This architecture enables execution of zero- or few-shot manipulation policies across diverse hands and objects—rigid and deformable—demonstrating that a single learned world model can be deployed as a universal cross-embodiment interface.

6. Implications, Best Practices, and Open Directions

Recent results suggest that environment dynamics can be modeled in an embodiment-invariant way and that cross-embodiment data—appropriately represented and fused—can be leveraged to build genuinely generalist control policies. Key recommendations supported by empirical data:

  • Represent states and actions in a geometric or functionally abstracted space (e.g., particles, contact surfaces, or functional metrics) to maximize data re-use.
  • Co-train models on diverse combinations of simulation, real-robot, and human demonstrations to improve generalization, particularly for complex or deformable tasks.
  • Utilize scalable augmentation pipelines (segmentation, inpainting, re-rendering) to balance datasets and equalize embodiment distribution entropy.
  • Integrate learned world models with sampling-based or differentiable planners to facilitate zero-/few-shot deployment.

Outstanding challenges include extending these frameworks to highly articulated, compliant, or underactuated embodiments, relaxing reliance on accurate forward kinematics, and unifying spatial, temporal, and multimodal augmentations for further cross-embodiment robustness.

References:

  • "Scaling Cross-Embodiment World Models for Dexterous Manipulation" (He et al., 3 Nov 2025)
  • "CEDex: Cross-Embodiment Dexterous Grasp Generation at Scale from Human-like Contact Representations" (Wu et al., 29 Sep 2025)
  • "OXE-AugE: A Large-Scale Robot Augmentation of OXE for Scaling Cross-Embodiment Policy Learning" (Ji et al., 15 Dec 2025)
  • "CEI: A Unified Interface for Cross-Embodiment Visuomotor Policy Learning in 3D Space" (Wu et al., 14 Jan 2026)
