Factor Graph Optimization (FGO)
- Factor Graph Optimization (FGO) is a probabilistic framework that models estimation problems as graphs for efficient nonlinear optimization.
- FGO is foundational in robotics, navigation, and SLAM, enabling robust and accurate state estimation in complex environments.
- Unlike EKF, FGO can exploit historical information and iterative optimization to significantly improve accuracy and resilience in challenging conditions.
Factor Graph Optimization (FGO) is a probabilistic inference and nonlinear optimization framework used to model and solve high-dimensional estimation, planning, and control problems by representing them as graphs composed of variable and factor nodes. Each variable node encodes a latent system state (e.g., position, velocity, attitude) and each factor encodes a probabilistic constraint or cost arising from sensor measurements, physical models, or system objectives. Factor graphs enable modular, extensible, and computationally efficient representations of structured estimation problems, and have become foundational in disciplines such as navigation, robotics, SLAM, and control.
1. Principles of Factor Graph Optimization
FGO represents the system’s joint probability (or maximum a posteriori, MAP) estimate as a product of “factor” potentials, each corresponding to a residual between the predicted and observed (or desired) quantities under an explicit sensor or dynamic model:

$$\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \sum_{k} \left\| h_k(\mathcal{X}_k) - \mathbf{z}_k \right\|^2_{\Sigma_k},$$

where $\mathcal{X}_k$ is a subset of states connected by factor $k$, $h_k(\cdot)$ models the predicted measurement or constraint, $\mathbf{z}_k$ is the observation or target, and $\Sigma_k$ the covariance/weight. In the context of GNSS/INS integration, factors may model GNSS pseudoranges, pseudorange rates, inertial propagation, or motion constraints (Wen et al., 2020).
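To make this representation concrete, the following minimal Python sketch shows variable nodes as state vectors and factor nodes as weighted residual functions over a subset of variables. The `Variable` and `Factor` classes are illustrative names of our own, not taken from any particular library or from Wen et al. (2020):

```python
import numpy as np

class Variable:
    """A latent state (e.g., position, velocity, attitude) to be estimated."""
    def __init__(self, name, initial_value):
        self.name = name
        self.value = np.asarray(initial_value, dtype=float)

class Factor:
    """A probabilistic constraint connecting a subset of variables.

    residual_fn(values) should return h(x) - z, and sqrt_info is the
    square-root information matrix (inverse square-root covariance) used
    to weight the residual.
    """
    def __init__(self, variables, residual_fn, sqrt_info):
        self.variables = variables          # list of Variable objects
        self.residual_fn = residual_fn      # callable: list of values -> residual
        self.sqrt_info = np.asarray(sqrt_info, dtype=float)

    def whitened_residual(self):
        values = [v.value for v in self.variables]
        return self.sqrt_info @ self.residual_fn(values)

# Example: a prior factor anchoring a 2D position near the origin.
x0 = Variable("x0", [0.1, -0.2])
prior = Factor([x0], lambda vals: vals[0] - np.zeros(2), np.eye(2))
print(prior.whitened_residual())   # weighted residual of the prior factor
```

In practice, factor graph libraries such as GTSAM provide these abstractions together with efficient sparse solvers.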
The key property of FGO is its ability to exploit sparse structure: each factor depends on a small subset of variables, which allows efficient computation using modern sparse nonlinear optimizers (e.g., Gauss-Newton, Levenberg-Marquardt, or incremental solvers such as iSAM2).
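A dense, illustrative sketch of the Gauss-Newton iteration underlying these solvers is shown below. The beacon/range toy problem is an assumption for demonstration only; real FGO back-ends exploit the sparsity of the stacked Jacobian (e.g., via sparse factorization or incremental updates in iSAM2) rather than the dense solve used here:

```python
import numpy as np

def gauss_newton(x0, residual_fn, jacobian_fn, num_iters=10, tol=1e-8):
    """Minimize 0.5 * ||r(x)||^2 by repeated linearization.

    residual_fn(x) -> stacked whitened residuals of all factors, shape (m,)
    jacobian_fn(x) -> stacked Jacobian, shape (m, n); sparse in real FGO problems
    """
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        r = residual_fn(x)
        J = jacobian_fn(x)
        # Normal equations: (J^T J) dx = -J^T r.  Sparse solvers or QR on a
        # sparse J are used in practice instead of this dense solve.
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy example: estimate a 2D point from range measurements to two beacons.
beacons = np.array([[0.0, 0.0], [10.0, 0.0]])
ranges = np.array([5.0, 7.0])
res = lambda x: np.linalg.norm(x - beacons, axis=1) - ranges
jac = lambda x: (x - beacons) / np.linalg.norm(x - beacons, axis=1)[:, None]
print(gauss_newton([1.0, 1.0], res, jac))
```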
2. FGO versus Extended Kalman Filtering (EKF)
FGO fundamentally differs from recursive filtering approaches such as the EKF in several aspects:
- Historical Information Usage: FGO can jointly optimize over a batch or window of past and current states, capturing time-correlation and breaking the first-order Markov assumption. This results in increased resilience to outlier contamination (e.g., GNSS multipath/NLOS) and produces smoother, more accurate state trajectories. In urban canyon experiments, tightly coupled FGO achieves a mean 2D error of 3.64 m, compared to 8.03 m for TC EKF (a 54.7% reduction) (Wen et al., 2020).
- Iterative Nonlinear Optimization: Unlike EKF, which linearizes and updates state only once per epoch, FGO leverages repeated re-linearization within a batch, enhancing nonlinearity handling for complex measurement models.
- Robustness to Outliers and Non-Gaussian Noise: With its batch structure and flexible error models, FGO can incorporate robust loss functions or measurement weighting schemes to mitigate non-Gaussian measurement errors more effectively than the Kalman filter framework (a minimal robust-weighting sketch follows this list).
- Sliding Window Adaptation: FGO can be implemented in a sliding-window fashion to trade off real-time performance and optimality, with window size impacting the balance between historical exploitation and adaptability to environmental changes. Empirically, a 30 s window yielded near-optimal performance in urban GNSS/INS (Wen et al., 2020).
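As a minimal sketch of the robust weighting referenced above, the following applies a Huber-style weight to downweight large residuals (e.g., from NLOS or multipath) before a weighted least-squares update. The threshold `delta` and the example values are illustrative assumptions, not parameters from Wen et al. (2020):

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """Per-residual weights derived from the Huber loss.

    Residuals within `delta` keep full (quadratic) weight; larger residuals
    are downweighted proportionally to 1/|r|, limiting outlier influence.
    """
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w

# Example: one large NLOS-like pseudorange residual among nominal ones.
residuals = np.array([0.3, -0.5, 0.2, 12.0])   # metres
weights = huber_weights(residuals, delta=1.5)
print(weights)   # the 12 m outlier receives a much smaller weight
```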
3. The Role of Window Size and Time Correlation
In practical FGO systems, optimization is often performed over a finite window of length $T$ (in seconds or epochs). This windowed batch comprises recent states and all associated measurement factors. The choice of window size significantly affects performance:
- Small Windows (e.g., $T = 1$ s): FGO reverts to EKF-like performance, yielding incremental improvements due to iterative optimization but limited exploitation of prior information. For example, a window of 1 s still yielded 5.18 m error (vs. 8.03 m EKF), attributed to the relinearization benefit rather than temporal correlation (Wen et al., 2020).
- Moderate Windows (e.g., 30 s): Substantially reduce error (to 3.74 m), as they allow the optimizer to exploit temporal redundancy and better distinguish persistent outliers, particularly effective in environments with long-tailed error distributions.
- Very Large Windows: Can suffer if measurement noise statistics change rapidly, as outdated historical data may introduce non-representative information and hurt accuracy at certain epochs (shown experimentally).
- Optimal Windowing: Context- and environment-dependent; must balance maximizing time-correlation use against the risk of model mismatch due to non-stationary error distributions.
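The windowing mechanics underlying these trade-offs can be sketched as follows. The `SlidingWindow` class is a simplified illustration; a full implementation would marginalize dropped states into a prior factor rather than discarding them:

```python
from collections import deque

class SlidingWindow:
    """Keep only the most recent `window_size` seconds of states and factors."""
    def __init__(self, window_size):
        self.window_size = window_size
        self.epochs = deque()            # each entry: (timestamp, state, factors)

    def add_epoch(self, timestamp, state, factors):
        self.epochs.append((timestamp, state, factors))
        # Drop epochs older than the window; a full implementation would
        # marginalize them into a prior factor instead of discarding.
        while self.epochs and timestamp - self.epochs[0][0] > self.window_size:
            self.epochs.popleft()

    def active_factors(self):
        return [f for _, _, factors in self.epochs for f in factors]

# Example: a 30 s window keeps roughly the last 30 one-second epochs.
window = SlidingWindow(window_size=30.0)
for t in range(100):
    window.add_epoch(float(t), state=None, factors=[f"factor_{t}"])
print(len(window.epochs))   # about 31 epochs remain in the window
```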
4. Robustness in Challenging Measurement Environments
FGO demonstrates pronounced advantages when GNSS measurement errors deviate from simple Gaussian assumptions, as is common in urban canyons where multipath and NLOS effects induce long-tailed, multi-modal error distributions (Wen et al., 2020). Histogram and GMM (Gaussian Mixture Model) analyses confirm pronounced tails in the error distribution and motivate windowing strategies that are responsive to the current measurement environment. The FGO structure:
- Incorporates Redundant, Temporally Correlated Constraints: Time-linked factors (e.g., inertial propagation) enforce motion consistency, allowing the optimizer to downweight or reject transient outliers.
- Yields Smoother and Smaller Residuals: Analysis shows that FGO-based solutions not only have smaller 2D error but also exhibit smoother measurement residuals across time, which reflects better state consistency and outlier rejection.
- Empirically Captures Error Structure: Window sizes on the order of 30 s adequately capture the current error distribution, maximizing estimator robustness (see, e.g., Figures 10–12 and corresponding GMM fits).
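Echoing the GMM analysis noted above, a minimal sketch of fitting a two-component Gaussian mixture to pseudorange residuals is shown below; the synthetic residuals and the choice of two components are illustrative assumptions, not data from the cited work:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic pseudorange residuals: a dominant near-Gaussian mode plus a
# long-tailed NLOS/multipath mode, mimicking urban-canyon behaviour.
rng = np.random.default_rng(0)
nominal = rng.normal(loc=0.0, scale=1.0, size=800)
nlos = rng.normal(loc=15.0, scale=8.0, size=200)
residuals = np.concatenate([nominal, nlos]).reshape(-1, 1)

# Fit a 2-component GMM to characterise the long-tailed error distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)
print("means:", gmm.means_.ravel())
print("std devs:", np.sqrt(gmm.covariances_).ravel())
print("weights:", gmm.weights_)
```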
5. Degeneration and Comparative Analysis: “EKF-like FGO”
A key experiment in (Wen et al., 2020) involves reducing the FGO window to a single epoch ("EKF-like" estimator) to isolate the benefit of iterative non-linear optimization apart from history utilization:
- EKF Mean Error (TC): 8.03 m
- "EKF-like" FGO (window 1): 5.18 m
This delta illustrates that multiple iterations and improved nonlinearity handling—even absent historical information—are important advantages. The full benefit of FGO, however, requires moderate window sizes, where temporal correlation can suppress outlier influence.
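A toy numerical illustration of this relinearization benefit, using the same Gauss-Newton update as the earlier sketch on a simple two-range positioning problem (values are illustrative and unrelated to the cited experiments):

```python
import numpy as np

# Toy nonlinear problem: position from two range ("pseudorange-like") measurements.
beacons = np.array([[0.0, 0.0], [10.0, 0.0]])
ranges = np.array([6.0, 8.0])
residual = lambda x: np.linalg.norm(x - beacons, axis=1) - ranges
jacobian = lambda x: (x - beacons) / np.linalg.norm(x - beacons, axis=1)[:, None]

def gauss_newton_steps(x0, n_iters):
    """Run n_iters Gauss-Newton relinearization steps."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        r, J = residual(x), jacobian(x)
        x = x + np.linalg.solve(J.T @ J, -J.T @ r)
    return x

x0 = [1.0, 1.0]                               # deliberately poor initial guess
single = gauss_newton_steps(x0, n_iters=1)    # analogous to one EKF update
iterated = gauss_newton_steps(x0, n_iters=5)  # FGO-style repeated relinearization
# The iterated solution leaves a much smaller residual than the single step.
print(np.linalg.norm(residual(single)), np.linalg.norm(residual(iterated)))
```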
6. Mathematical Formulation and Core Equations
The general MAP objective for FGO-based GNSS/INS fusion is

$$\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \sum_{k} \left\| h_k(\mathcal{X}_k) - \mathbf{z}_k \right\|^2_{\Sigma_k},$$

where $h_k(\cdot)$ represents system, dynamics, or sensor models. In tightly-coupled GNSS/INS, the GNSS pseudorange model (after satellite clock and atmospheric corrections) is

$$\rho^{s} = \left\| \mathbf{p}_r - \mathbf{p}^{s} \right\| + c\,\delta t_r + \varepsilon^{s},$$

where $\mathbf{p}_r$ is the receiver position, $\mathbf{p}^{s}$ the satellite position, $c\,\delta t_r$ the receiver clock bias (in metres), and $\varepsilon^{s}$ the measurement error. The mean residual over $N$ measurements is

$$\bar{r} = \frac{1}{N} \sum_{i=1}^{N} \left\| h_i(\hat{\mathcal{X}}_i) - \mathbf{z}_i \right\|.$$
Adjusting the optimization window length $T$ sets the trade-off between historical constraint strength and current adaptability.
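An illustrative (simplified) pseudorange factor consistent with the model above might be implemented as follows, assuming satellite clock and atmospheric corrections have already been applied and the receiver clock bias is expressed in metres; the function names and numbers are our own illustrative choices:

```python
import numpy as np

def pseudorange_residual(receiver_pos, clock_bias, sat_pos, measured_rho, sigma):
    """Whitened residual of a single pseudorange factor.

    Model: rho = ||p_r - p_s|| + c * dt_r + noise, with c * dt_r folded into
    `clock_bias` (metres). Atmospheric and satellite clock corrections are
    assumed to have been applied to `measured_rho` already.
    """
    predicted = np.linalg.norm(receiver_pos - sat_pos) + clock_bias
    return (predicted - measured_rho) / sigma

def pseudorange_jacobian(receiver_pos, sat_pos, sigma):
    """Jacobian of the whitened residual w.r.t. [receiver_pos, clock_bias]."""
    # Derivative of the geometric range w.r.t. receiver position is the unit
    # vector from satellite to receiver; derivative w.r.t. clock bias is 1.
    u = (receiver_pos - sat_pos) / np.linalg.norm(receiver_pos - sat_pos)
    return np.append(u, 1.0) / sigma

# Example with illustrative ECEF-like numbers (metres).
p_r = np.array([-2.7e6, 4.3e6, 3.8e6])
p_s = np.array([1.5e7, 1.0e7, 2.0e7])
rho = np.linalg.norm(p_r - p_s) + 120.0 + 3.5   # true bias 120 m, 3.5 m error
print(pseudorange_residual(p_r, 100.0, p_s, rho, sigma=5.0))
print(pseudorange_jacobian(p_r, p_s, sigma=5.0))
```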
7. Concluding Insights and Research Implications
Empirical evidence demonstrates that FGO is markedly superior to EKF in GNSS/INS integration, especially for tightly-coupled architectures and in GNSS-challenged environments. Window size tuning is essential for maintaining estimator robustness in dynamic contexts. FGO’s capacity for iterative nonlinear optimization, exploitation of time-correlated information, and enhanced outlier resilience underpins its improved accuracy. As research progresses, adaptive window management and advanced noise modeling (e.g., GMMs) are expected to further enhance the practical utility of FGO in real-world urban navigation and sensor fusion applications (Wen et al., 2020).