
Pose-Free Feed-Forward 4D Reconstruction

Updated 5 January 2026
  • These methods use feed-forward deep networks to reconstruct 4D spatiotemporal scenes from unconstrained videos without external camera poses.
  • They leverage transformer modules, memory banks, and implicit representations to handle dynamic scenes with high temporal coherence and efficiency.
  • The resulting models enable real-time view synthesis, simulation, and asset creation, outperforming traditional pose-dependent pipelines.

Pose-free feed-forward 4D reconstruction refers to a rapidly maturing family of methods that reconstruct the spatiotemporal structure of dynamic scenes directly from unconstrained images or videos—without the need for either explicit external camera poses or expensive per-scene optimization. These approaches are primarily realized through deep learning architectures that consume monocular or multi-view video data; predict explicit or implicit 3D/4D scene representations; and enable applications such as unconstrained novel-view and novel-time synthesis, 4D asset creation, and simulation. The "feed-forward" attribute indicates that scene reconstruction proceeds in a fixed number of forward network passes, typically yielding interactive- to real-time performance, while "pose-free" signifies that all geometry and camera motion estimation is internal to the model.

1. Core Principles and Problem Formulation

Pose-free feed-forward 4D reconstruction strategies discard the historical reliance on externally provided 6-DoF camera poses and optimization-heavy multi-view geometry. Instead, they jointly infer scene structure, camera motion, and (in many cases) per-object or per-pixel dynamics from the input data. These methods operate on the assumption that sufficient geometric and temporal information can be extracted from monocular or uncalibrated multi-view videos by (a) learning strong geometric priors, (b) exploiting temporally coherent motion, and (c) leveraging powerful architectural modules—primarily transformers, memory banks, and auto-regressive inference pipelines (Lu et al., 30 Oct 2025, Chen et al., 2 Dec 2025, Chen et al., 18 Aug 2025).

Formally, the input may be a video clip $\{I_t\}_{t=1}^{T}$ or even just a single image. The target output is a dynamic 3D scene representation that may include:

  • Per-frame 3D Gaussian fields, 4D Gaussians (carrying explicit time dependence), pointmaps, dynamic neural radiance fields, or 6D video representations (joint RGB+XYZ).
  • Scene and/or object-centric rendering at arbitrary times and viewpoints, with internally regressed or implicit camera parameters.
  • For simulation-oriented methods, physical attributes and temporal evolution (e.g., through explicit simulation of predicted physical properties).

The pose-free property is enforced either by direct regression of camera parameters as network outputs, by formulating scene reconstruction in a canonical frame, or by employing representations where view specification need not be known a priori.
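
As a minimal sketch of this input/output contract, the following hypothetical module maps an unposed frame sequence to pixel-aligned pointmaps and internally regressed camera parameters. All names and the toy convolutional backbone are assumptions for illustration, not any published architecture.

```python
# Minimal sketch of the pose-free feed-forward contract described above.
# FeedForward4D, its heads, and the backbone are hypothetical stand-ins.
import torch
import torch.nn as nn


class FeedForward4D(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Shared per-frame encoder (stand-in for a transformer backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Pixel-aligned pointmap head: one XYZ coordinate per pixel.
        self.pointmap_head = nn.Conv2d(feat_dim, 3, 1)
        # Camera head: regresses intrinsics + extrinsics instead of consuming them.
        self.camera_head = nn.Linear(feat_dim, 4 + 7)  # fx, fy, cx, cy + quaternion, translation

    def forward(self, frames: torch.Tensor):
        # frames: (T, 3, H, W) unposed RGB video clip.
        feats = self.encoder(frames)             # (T, C, H, W)
        pointmaps = self.pointmap_head(feats)    # (T, 3, H, W) per-pixel XYZ
        pooled = feats.mean(dim=(2, 3))          # (T, C) global frame descriptor
        cameras = self.camera_head(pooled)       # (T, 11) regressed camera parameters
        return pointmaps, cameras


if __name__ == "__main__":
    clip = torch.randn(8, 3, 64, 64)             # 8-frame unposed clip
    pointmaps, cameras = FeedForward4D()(clip)
    print(pointmaps.shape, cameras.shape)        # (8, 3, 64, 64), (8, 11)
```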

2. Leading Representation Families

The principal representation modalities are:

  • Pixel-aligned pointmaps: Each input frame is mapped through a transformer or encoder–decoder to predict a dense image-aligned map $P_t(u) \in \mathbb{R}^3$ for every pixel $u$; camera pose is regressed or marginalized away (Zhang et al., 19 Jul 2025, Zhou et al., 20 Oct 2025).
  • 3D/4D Gaussian Splatting (GS): The scene is modeled as a dense or sparse field of 3D Gaussians, with parameters (mean, covariance, color, opacity, and lifespan/dynamics) inferred per input frame or jointly for a sequence (Chen et al., 2 Dec 2025, Yang et al., 1 Jan 2026, Huang et al., 23 Oct 2025, Zhao et al., 6 Aug 2025).
  • Implicit radiance/density fields: Neural networks predict radiance and occupancy as a function of $(x, y, z, t)$ without explicit per-frame parameterization (Yang et al., 1 Jan 2026).
  • Joint 6D video representations (RGB+XYZ): Latent-diffusion models produce a sequence of pixel-aligned RGB and XYZ maps, directly yielding dynamic point clouds (and hence 4D trajectories) (Chen et al., 18 Aug 2025).
  • Hybrid structures with motion modeling: Explicit motion vectors, state tokens, or velocity fields are used to propagate or interpolate scene elements across time, enabling continuous-time reconstruction and synthesis (Hu et al., 29 Sep 2025, Yang et al., 1 Jan 2026).

Many leading systems leverage memory mechanisms, such as dual-key spatial–directional memory (Huang et al., 23 Oct 2025), learnable state tokens (Hu et al., 29 Sep 2025), or lifespan heads (Chen et al., 2 Dec 2025, Yang et al., 1 Jan 2026) to maintain temporal coherence and object-level consistency.
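
As a concrete but deliberately simplified illustration of the Gaussian-splatting family with explicit time dependence, the sketch below defines a single time-aware Gaussian primitive whose opacity is modulated by a lifespan window and whose center is propagated by a linear velocity. The field names and temporal model are assumptions, not the parameterization of any specific paper.

```python
# Generic sketch of a time-aware (4D) Gaussian primitive with a lifespan,
# illustrating the kind of state a lifespan head might predict.
from dataclasses import dataclass
import numpy as np


@dataclass
class Gaussian4D:
    mean: np.ndarray      # (3,) center position
    cov: np.ndarray       # (3, 3) spatial covariance
    color: np.ndarray     # (3,) RGB
    opacity: float        # base opacity in [0, 1]
    t_center: float       # time at which the primitive is most visible
    t_sigma: float        # temporal extent ("lifespan")
    velocity: np.ndarray  # (3,) linear motion used to propagate the mean over time

    def opacity_at(self, t: float) -> float:
        """Opacity modulated by a Gaussian window over time."""
        return self.opacity * float(np.exp(-0.5 * ((t - self.t_center) / self.t_sigma) ** 2))

    def mean_at(self, t: float) -> np.ndarray:
        """Linearly propagated center at time t (simplest possible motion model)."""
        return self.mean + (t - self.t_center) * self.velocity


g = Gaussian4D(np.zeros(3), np.eye(3) * 0.01, np.array([0.8, 0.2, 0.2]),
               opacity=0.9, t_center=0.5, t_sigma=0.2, velocity=np.array([0.1, 0.0, 0.0]))
print(g.opacity_at(0.7), g.mean_at(0.7))
```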

3. Architectures and Inference Pipelines

A core enabler of pose-free feed-forward 4D reconstruction has been a set of architectural innovations that fuse spatial, temporal, and geometric information while supporting both partial and online processing.

  • Transformer-based modules: Alternating intra-frame and cross-frame attention, combined with dynamic-aware masking, permits decoupling of camera and geometry information (e.g., as in PAGE-4D and VGGT variants (Zhou et al., 20 Oct 2025)); a generic sketch of this attention pattern follows the list.
  • Auto-regressive spatial/temporal expansion: SEE4D and similar pipelines employ trajectory-to-camera splines and sliding window inference to ensure scale and memory efficiency over long sequences (Lu et al., 30 Oct 2025).
  • Dual-head or multi-branch decoders: For physical simulation or rendering, architectures often include parallel decoders for geometry, physics (Young’s modulus $E$, Poisson’s ratio $\nu$, density $\rho$), and appearance (Lv et al., 19 Aug 2025).
  • Online memory mechanisms: OnlineSplatter’s dual-key memory module fuses temporally aggregated features and enforces spatial coverage, ensuring constant runtime and memory for arbitrarily long sequences (Huang et al., 23 Oct 2025).
  • Layer-wise scale alignment: To resolve scale drift caused by monocular ambiguity, LASER applies per-layer depth alignment and windowed Sim(3) registration, enabling real-time streaming operation (Ding et al., 15 Dec 2025).
  • Motion interpolation modules: Forge4D introduces dense 3D motion prediction coupled with occlusion-aware Gaussian fusion, supporting both reconstruction and continuous-time interpolation without explicit target pose/time specification (Hu et al., 29 Sep 2025).
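
The alternating intra-frame/cross-frame attention named in the first bullet above can be written as two standard self-attention passes over a (frames × tokens-per-frame) token grid. The block below is a minimal, generic sketch of that pattern, not the exact PAGE-4D or VGGT layer; the module and tensor names are illustrative.

```python
# Generic sketch of alternating intra-frame (spatial) and cross-frame (temporal)
# self-attention over a (T, N, C) token grid.
import torch
import torch.nn as nn


class AlternatingAttention(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (T, N, C) = frames x patch tokens per frame x channels.
        T, N, C = tokens.shape
        # Intra-frame attention: each frame's N patch tokens attend to one another.
        x = self.norm1(tokens)                      # batch = T, sequence = N
        tokens = tokens + self.intra(x, x, x, need_weights=False)[0]
        # Cross-frame attention: each spatial location attends across the T frames.
        x = self.norm2(tokens).transpose(0, 1)      # (N, T, C): batch = N, sequence = T
        tokens = tokens + self.cross(x, x, x, need_weights=False)[0].transpose(0, 1)
        return tokens


blk = AlternatingAttention()
out = blk(torch.randn(8, 256, 128))                 # 8 frames, 256 patch tokens each
print(out.shape)                                    # torch.Size([8, 256, 128])
```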

Typically, inference consists of a single pass or a fixed number of passes through the network, with optional lightweight post-alignment or fusion. Some pipelines support fully online operation, with object updates at $O(1)$ resource cost per frame (Huang et al., 23 Oct 2025, Ding et al., 15 Dec 2025).
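
The lightweight post-alignment mentioned above—and windowed Sim(3) registration in particular—typically builds on closed-form similarity (scale, rotation, translation) alignment of corresponding point sets. The following is a minimal sketch using the classical Umeyama solution, assuming correspondences are already available; it is not LASER's actual procedure.

```python
# Closed-form Sim(3) (scale + rotation + translation) alignment of corresponding
# point sets via Umeyama's method; a generic building block, not the LASER code.
import numpy as np


def umeyama_sim3(src: np.ndarray, dst: np.ndarray):
    """Return (s, R, t) such that s * R @ src_i + t ~= dst_i. src, dst: (N, 3)."""
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    x, y = src - mu_src, dst - mu_dst
    cov = y.T @ x / src.shape[0]                 # cross-covariance of centered sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # reflection correction
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (x ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src        # optimal scale
    t = mu_dst - s * R @ mu_src
    return s, R, t


# Sanity check: recover a known similarity transform.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = 2.0 * P @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = umeyama_sim3(P, Q)
print(round(s, 3))                                # ~2.0
```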

4. Datasets, Protocols, and Quantitative Results

Recent methods are validated on both synthetic and real-world benchmarks covering static and highly dynamic content:

  • Static and dynamic multi-view: SynCamMaster, ReCamMaster, ScanNet++, and DL3DV-10K provide static and narrow-baseline geometry (Lu et al., 30 Oct 2025, Yang et al., 1 Jan 2026).
  • Dynamic and in-the-wild video: PointOdyssey, Waymo, nuScenes, ARKitScenes, VBench and DNA-Rendering emphasize dynamic and large-scale casual videos, human-centric scenarios, and high variability (Chen et al., 2 Dec 2025, Hu et al., 29 Sep 2025, Yang et al., 1 Jan 2026).
  • Evaluation metrics: PSNR, SSIM, LPIPS for photometric/structural fidelity; ATE (Absolute Trajectory Error), RPE (Relative Pose Error), Chamfer distance, scene flow L2 error, and temporal consistency scores for geometric and dynamic accuracy; minimal sketches of two of these metrics follow the list.
  • Empirical performance: SEE4D surpasses trajectory-based and pose-conditioned baselines in both 4D reconstruction and video generation tasks, e.g., outperforming TrajectoryCrafter on PSNR/SSIM/LPIPS and achieving top-2 across all VBench metrics (Lu et al., 30 Oct 2025). DGGT exceeds prior SOTA for dynamic driving scenes on PSNR and 3D flow metrics without input poses (Chen et al., 2 Dec 2025). OnlineSplatter demonstrates continual improvement as more frames are observed, exceeding pose-free baselines on GSO and HO3D (Huang et al., 23 Oct 2025). NeoVerse delivers superior generalization and runtime, with interactive performance on benchmark suites (Yang et al., 1 Jan 2026).
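
For reference, minimal implementations of two of the metrics above (PSNR and symmetric Chamfer distance) are sketched below; exact benchmark protocols (masking, alignment, scale handling) vary across the cited papers.

```python
# Minimal reference implementations of PSNR (photometric fidelity) and
# symmetric Chamfer distance (geometric accuracy).
import numpy as np


def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))


def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())


img = np.random.rand(64, 64, 3)
print(psnr(img, np.clip(img + 0.01, 0, 1)))        # high PSNR for a small perturbation
print(chamfer_distance(np.random.rand(100, 3), np.random.rand(120, 3)))
```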

A representative summary table:

| Method | Input Modalities | Core Output | Notable Metrics (Best) | Pose-Free | Online | Reference |
|---|---|---|---|---|---|---|
| SEE4D | monocular video | 4D multi-view inpainted video | PSNR ↑, SSIM ↑, LPIPS ↓ | ✓ | – | (Lu et al., 30 Oct 2025) |
| PhysGM | single image | 3DGS + physics simulation | CLIP_sim ↑, UPR | ✓ | – | (Lv et al., 19 Aug 2025) |
| DGGT | N unposed frames | 4D Gaussian field | PSNR, 3D-flow EPE | ✓ | – | (Chen et al., 2 Dec 2025) |
| OnlineSplatter | streaming RGB, no pose | online 3DGS object | PSNR, LPIPS | ✓ | ✓ | (Huang et al., 23 Oct 2025) |
| PAGE-4D | dynamic video | pose, depth, pointmap | AbsRel ↓, ATE ↓ | ✓ | – | (Zhou et al., 20 Oct 2025) |
| NeoVerse | monocular video | 4DGS radiance/density field | PSNR ↑, LPIPS ↓ | ✓ | – | (Yang et al., 1 Jan 2026) |
| Forge4D | sparse multi-view video | temporal 3DGS + interpolation | PSNR, motion error | ✓ | – | (Hu et al., 29 Sep 2025) |

5. Advantages, Limitations, and Failure Modes

Advantages

  • Pose-free operation: camera motion and scene geometry are estimated internally, so the pipelines run directly on unconstrained or casually captured video without external pose estimation.
  • Feed-forward efficiency: reconstruction completes in a single pass or a fixed number of network passes, typically at interactive to real-time rates, avoiding per-scene optimization.
  • Online and streaming capability: memory-based designs such as OnlineSplatter and LASER maintain constant per-frame compute and memory over long sequences (Huang et al., 23 Oct 2025, Ding et al., 15 Dec 2025).
  • Temporal coherence: memory banks, state tokens, and lifespan heads preserve object-level consistency across time (Chen et al., 2 Dec 2025, Hu et al., 29 Sep 2025, Yang et al., 1 Jan 2026).

Limitations

  • Depth ambiguity: Monocular depth and geometry priors can be unreliable in scenes with thin, reflective, or heavily occluded structures, leading to incorrect reconstructions (Lu et al., 30 Oct 2025, Yang et al., 1 Jan 2026).
  • Generalization to complex physics: Methods such as PhysGM currently predict a uniform material model for the entire object and do not fully capture spatially varying or articulated physics (Lv et al., 19 Aug 2025).
  • Memory and sequence length: Despite advances such as LASER and OnlineSplatter, very long or high-resolution sequences still present significant computational challenges (Ding et al., 15 Dec 2025, Huang et al., 23 Oct 2025).
  • Limited explicit control: Current approaches afford restricted or indirect control over physical parameters, rendering styles, or scene editability beyond camera/viewpoint specification (Chen et al., 18 Aug 2025, Yang et al., 1 Jan 2026).
  • Performance ceiling in challenging scenes: The accuracy gap to optimization-based methods persists under extreme dynamic complexity or minimal input supervision (Zhang et al., 19 Jul 2025).

Failure cases typically involve breakdowns in monocular depth or motion-consistency estimation, extreme occlusion, and viewpoint jumps that exceed the model’s generalization envelope (Lu et al., 30 Oct 2025, Yang et al., 1 Jan 2026, Huang et al., 23 Oct 2025).

6. Prospects and Future Research Directions

Several research avenues are actively pursued:

  • Hybrid and multi-modal supervision: Integration of self-supervised geometry refinement, multi-sensor fusion (LiDAR, event cameras), and textual/semantic priors to mitigate failure under poor geometric cues (Lu et al., 30 Oct 2025, Chen et al., 18 Aug 2025).
  • Richer physics modeling: Per-Gaussian or locally-varying physical property prediction and simulation (currently, only global attributes are predicted for objects) (Lv et al., 19 Aug 2025).
  • Streaming and online adaptation: Increased emphasis on constant-memory, causally aligned architectures for real-time, long-term scene reconstruction in robotics and AR/VR (Ding et al., 15 Dec 2025, Huang et al., 23 Oct 2025).
  • Generalization over scene types: Expanding coverage from object-centric to large-scale, unbounded, and highly articulated or fluid environments (Lv et al., 19 Aug 2025, Yang et al., 1 Jan 2026).
  • Fast feed-forward rendering: Diffusion–distillation and model compression for ultra-low-latency applications (e.g., real-time telepresence, autonomous robotics) (Lu et al., 30 Oct 2025).

Continued benchmarking on challenging datasets, tighter integration with world models and simulation, and progress in foundational representation learning are all expected to further shape the landscape of pose-free feed-forward 4D reconstruction.


