
Stereo Multi-State Constraint Kalman Filter

Updated 31 March 2026
  • S-MSCKF is a visual-inertial odometry framework that leverages stereo pair constraints to resolve scene scale and improve robustness under challenging motion conditions.
  • It integrates multi-state Kalman filtering with dual-channel feature association and state cloning to efficiently handle low-texture, pure rotation, and dynamic multi-agent scenarios.
  • The approach delivers state-of-the-art performance with bounded computational cost, making it ideal for real-time UAV navigation and embedded robotic applications.

The Stereo Multi-State Constraint Kalman Filter (S-MSCKF) is an extension of the original MSCKF framework for visual-inertial odometry, designed to exploit the geometric constraints of stereo image pairs for real-time, high-accuracy motion estimation. Unlike monocular methods, S-MSCKF leverages the spatial baseline between two synchronized cameras (or, in collaborative scenarios, two spatially separated but temporally co-observing agents) to directly resolve scene scale and enhance robustness to motion degeneracies, low-texture scenes, and pure rotation. The approach achieves state-of-the-art accuracy and computational efficiency for unconstrained navigation of aerial vehicles and mobile robots, running comfortably on embedded CPU platforms (Sun et al., 2017, Wang et al., 2024).

1. Theoretical Foundations and State Representation

The S-MSCKF maintains a filter state that fuses inertial, kinematic, and extrinsic parameters:

  • IMU core state: rotation as a quaternion ($q_{GI}$), position, velocity, and IMU biases ($b_g$, $b_a$).
  • Camera extrinsics: fixed rotation and translation from the IMU/body frame to the (usually left) camera ($q_{CB}$, $t_{CB}$).
  • Cloned camera poses: a sliding window of past camera poses (clones), supporting multi-epoch feature constraints.
  • Collaborative (variable baseline): in multi-agent or non-rigid stereo, additional relative pose states ($t_{B_{ij}}$, $q_{B_{ij}}$) track other platforms' positions and orientations.

In formal terms, the state is

$$x_k = \begin{bmatrix} x^{\mathrm{IMU}}_k \\ x^{\mathrm{rel}}_{ij,k} \\ x^{\mathrm{cam}} \end{bmatrix}$$

where each component is as specified above. State cloning and augmentation are performed on new (stereo) image arrivals (Wang et al., 2024, Sun et al., 2017).
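The layered state above can be sketched as a small container type with fixed-lag cloning. This is an illustrative sketch only: the names (`ImuState`, `FilterState`, `clone_camera_pose`) and the `[x, y, z, w]` quaternion layout are assumptions, not the reference implementations.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class ImuState:
    """IMU core state: attitude quaternion, position, velocity, biases."""
    q_GI: np.ndarray = field(default_factory=lambda: np.array([0., 0., 0., 1.]))
    p: np.ndarray = field(default_factory=lambda: np.zeros(3))
    v: np.ndarray = field(default_factory=lambda: np.zeros(3))
    b_g: np.ndarray = field(default_factory=lambda: np.zeros(3))  # gyro bias
    b_a: np.ndarray = field(default_factory=lambda: np.zeros(3))  # accel bias

@dataclass
class FilterState:
    imu: ImuState = field(default_factory=ImuState)
    clones: list = field(default_factory=list)  # sliding window of past poses
    max_clones: int = 7                         # fixed-lag window size N

def clone_camera_pose(state: FilterState) -> None:
    """On a new stereo image, append the current pose as a clone and drop
    the oldest one once the window is full (bounded-state principle)."""
    state.clones.append((state.imu.q_GI.copy(), state.imu.p.copy()))
    if len(state.clones) > state.max_clones:
        state.clones.pop(0)
```

In a real filter, cloning is accompanied by covariance augmentation; the sketch only shows the bookkeeping of the sliding window itself.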

Error-State Formulation

A minimal, right-invariant error parametrization is employed: quaternion parts are updated via the exponential map, and all other components additively. The full error vector aggregates IMU, exteroceptive, and extrinsic terms,

$$\delta x = \begin{bmatrix} \delta \theta^T & \delta v^T & \delta b_g^T & \delta b_a^T & \delta t_{B_{ij}}^T & \delta \phi_{B_{ij}}^T & (\text{cam-extrinsic error})^T \end{bmatrix}^T$$

This maintains linearity in the filter update and facilitates efficient linearization for Kalman filtering (Sun et al., 2017).
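The multiplicative-quaternion, additive-vector update can be illustrated with a minimal sketch. Quaternions are stored as `[x, y, z, w]` here, which is an assumed convention, not necessarily the one used in the papers.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product for quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = a
    x2, y2, z2, w2 = b
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def exp_quat(dtheta):
    """Exponential map: small rotation vector -> unit quaternion."""
    angle = np.linalg.norm(dtheta)
    if angle < 1e-12:
        return np.array([0., 0., 0., 1.])
    axis = dtheta / angle
    return np.concatenate([axis * np.sin(angle / 2.0), [np.cos(angle / 2.0)]])

def apply_error(q, v, b_g, b_a, dtheta, dv, db_g, db_a):
    """Multiplicative update for the quaternion, additive for the rest."""
    q_new = quat_mul(exp_quat(dtheta), q)
    return q_new / np.linalg.norm(q_new), v + dv, b_g + db_g, b_a + db_a
```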

2. IMU Propagation and State Augmentation

The continuous-time propagation equations incorporate IMU measurements:

$$\begin{aligned} \dot q_{GI} &= \frac{1}{2}\, q_{GI} \otimes (\omega_m - b_g) \\ \dot v_{I} &= R(q_{GI})\bigl(a_m - b_a\bigr) + g \\ \dot b_g &= n_{wg}, \qquad \dot b_a = n_{wa} \end{aligned}$$

Discrete propagation applies zero-order-hold integration between measurement intervals, with process noise injected in the linearized error-state space. Upon image arrival, a new camera clone is appended to the state, with covariance augmentation preserving cross terms (Sun et al., 2017, Wang et al., 2024).
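A single zero-order-hold step for the model above might look like the following sketch. The `[x, y, z, w]` quaternion convention and the function names are assumptions, and covariance propagation (which a real filter performs alongside this) is omitted.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product for quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = a
    x2, y2, z2, w2 = b
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion [x, y, z, w]."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def propagate(q, p, v, b_g, b_a, omega_m, a_m, dt,
              g=np.array([0., 0., -9.81])):
    """One zero-order-hold IMU step for the continuous-time model above."""
    w = omega_m - b_g                      # bias-corrected angular rate
    a = a_m - b_a                          # bias-corrected specific force
    # q_dot = 0.5 q ⊗ w integrates to a right-multiplied small rotation
    dq = np.concatenate([0.5 * w * dt, [1.0]])
    q_new = quat_mul(q, dq)
    q_new /= np.linalg.norm(q_new)
    v_new = v + (quat_to_rot(q) @ a + g) * dt
    p_new = p + v * dt
    return q_new, p_new, v_new             # biases follow a random walk
```

A quick sanity check: a stationary IMU measuring only gravity reaction (`a_m = [0, 0, 9.81]`, zero rates, zero biases) leaves the state unchanged.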

In multi-agent collaborative stereo, the relative pose states $t_{B_{ij}}$, $q_{B_{ij}}$ between partners are dynamically propagated using velocity increments supplied by each agent's visual-inertial odometry (VIO), accommodating non-rigid and time-varying baselines (Wang et al., 2024).

3. Stereo Measurement Model and Null-space Update

For each 3D feature $P$ observed in both left and right cameras, the predicted measurement uses pinhole projection:

$$\hat z_L = \pi\bigl(K,\, q_{C_L B}(P - t_{C_L B})\bigr), \qquad \hat z_R = \pi\bigl(K,\, q_{C_R B}(P - t_{C_R B})\bigr)$$

Stacking stereo pixel coordinates for all observations in the sliding window forms the measurement vector,

$$z^{j} = \begin{pmatrix} u_{i,1}^j \\ v_{i,1}^j \\ u_{i,2}^j \\ v_{i,2}^j \end{pmatrix}$$

Linearization around the filter estimate yields the stacked residual for each feature:

$$r = \begin{bmatrix} z^{k-m} \\ \vdots \\ z^{k} \end{bmatrix} - \begin{bmatrix} h(x_{k-m}, P) \\ \vdots \\ h(x_{k}, P) \end{bmatrix} \approx H_x\,\delta x + H_f\,\delta P + n_p$$

To sidestep a direct update of landmark positions ($\delta P$), S-MSCKF projects the constraint onto the left null space of $H_f$:

$$\tilde r = Q^T r, \qquad \tilde H_x = Q^T H_x$$

The Kalman update is then performed only on the reduced residual and the filter state:

$$S = \tilde H_x P \tilde H_x^T + Q^T R Q, \qquad K = P \tilde H_x^T S^{-1}, \qquad \delta x = K \tilde r$$

Landmark states are never added to the filter, keeping the computational cost bounded (Sun et al., 2017, Wang et al., 2024).
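The null-space projection and reduced update can be sketched in a few lines of numpy. The function name is illustrative, and building $Q$ from a full SVD of $H_f$ is one of several valid constructions (QR decomposition is another common choice):

```python
import numpy as np

def nullspace_update(r, H_x, H_f, P, R_cov):
    """EKF update after projecting the residual onto the left null space
    of the landmark Jacobian H_f (illustrative sketch)."""
    # Columns of U beyond rank(H_f) span the left null space of H_f.
    U, s, _ = np.linalg.svd(H_f, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    Q = U[:, rank:]
    r_t = Q.T @ r                    # reduced residual, free of delta P
    H_t = Q.T @ H_x                  # reduced state Jacobian
    S = H_t @ P @ H_t.T + Q.T @ R_cov @ Q
    K = P @ H_t.T @ np.linalg.inv(S)
    dx = K @ r_t                     # state correction
    P_new = (np.eye(P.shape[0]) - K @ H_t) @ P
    return dx, P_new
```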

4. Feature Association and Front-End Strategies

Real-time S-MSCKF performance relies on robust cross-frame and cross-camera feature association:

  • Guidance channel: High-quality SuperPoint + SuperGlue matching at low frame rates (~13 Hz).
  • Prediction channel: Lucas–Kanade optical flow on prior "matches club" propagated at full frame rate (30 Hz).
  • Outlier rejection: Sequential filtering through grid-based suppression, cross-consistency checking, RANSAC on the essential matrix, and directional-angle clustering.
  • Feature budget: Strict per-frame limit (e.g., 200 tracks) and grid-based selection prevent spatial and computational bottlenecks.

For collaborative stereo, keypoint and descriptor exchanges between platforms enable dynamic stereo pair formation, despite drifting or asynchronous baselines (Wang et al., 2024).
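The grid-based suppression and per-frame budget above can be sketched as follows. The cell counts and the 200-track limit echo the text; everything else (function name, tie-breaking by score) is an assumption:

```python
import numpy as np

def grid_suppress(pts, scores, img_w, img_h, grid=(8, 6), max_total=200):
    """Keep at most one best-scoring feature per grid cell, then enforce
    a global per-frame budget. Returns indices of retained features."""
    cell_w, cell_h = img_w / grid[0], img_h / grid[1]
    best = {}  # cell -> index of best-scoring feature seen so far
    for i, (x, y) in enumerate(pts):
        cell = (int(x // cell_w), int(y // cell_h))
        if cell not in best or scores[i] > scores[best[cell]]:
            best[cell] = i
    # Global budget: keep the highest-scoring survivors.
    return sorted(best.values(), key=lambda i: -scores[i])[:max_total]
```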

5. Computational Properties and Implementation

S-MSCKF is engineered for bounded, real-time execution through:

  • Fixed-lag estimation: A sliding window of $N$ camera pose clones (as small as $N=7$ for embedded platforms, up to $N=15$ for desktop), with the oldest or second-latest clone marginalized according to the keyframe strategy.
  • Efficient matrix operations: State and covariance update costs scale as $O((21+6N)^2(4mL-3L))$, with hard limits on $N$ and $L$ to guarantee practical throughput.
  • CPU load: Typical end-to-end update rates are 20–40 Hz on systems from NVIDIA Xavier NX to Intel NUC6i7, with total CPU usage ≲45% for the full pipeline at 640×480 resolution and 30 Hz.
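The $21 + 6N$ term in the cost expression corresponds to the error-state dimension implied by the state layout in Section 1 (15 IMU terms plus 6 camera-extrinsic terms, plus 6 per pose clone); a trivial check:

```python
def error_state_dim(n_clones, imu_dim=15, extrinsic_dim=6, clone_dim=6):
    """Error-state dimension 21 + 6N for the layout described above."""
    return imu_dim + extrinsic_dim + clone_dim * n_clones

print(error_state_dim(7))    # embedded window: 63
print(error_state_dim(15))   # desktop window: 111
```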

A plausible implication is that S-MSCKF is scalable to onboard UAV and multi-robot platforms operating with severe size, weight, and power constraints (Sun et al., 2017, Wang et al., 2024).

6. Empirical Performance and Robustness

On the EuRoC MAV and custom fast-flight datasets, S-MSCKF achieves RMS position errors on par with state-of-the-art optimization-based and monocular solutions (OKVIS, VINS-Mono), with values below 0.1 m (Vicon Room) and below 0.2 m (Machine Hall). In collaborative stereo with variable baselines, sub-decimeter relative pose estimation is consistently achieved at 30 Hz, with convergence to below 20 cm error within 5 s under challenging initialization, and resilience to ±10° initial yaw errors and pixel noise up to σ = 8 px (Wang et al., 2024, Sun et al., 2017).

The ability to avoid expensive stereo rectification and instead operate natively in the high-dimensional (4D) measurement space enables robust operation under pure rotation, low texture, aggressive maneuvers, and intermittent feature tracks. Feature and pose marginalization maintain filter stability even under asynchronous communication and dynamic inter-agent configurations.

7. Impact and Applicability

S-MSCKF is a definitive approach for vision-aided inertial navigation in settings where real-time constraints, robustness, and dynamic, variable stereo geometry are paramount. Its open-source implementations and validated performance position it as a reference platform for research in UAV navigation, cooperative perception, and embedded SLAM. By integrating dual-channel feature association and a null-space Kalman update, S-MSCKF underpins recent advances in collaborative stereo (e.g., aerial urban obstacle mapping by multiple UAVs) and is a foundation for future scalable multi-agent vision systems (Wang et al., 2024, Sun et al., 2017).
