Informed-RRT*: Efficient Optimal Planning
- Informed-RRT* is a sampling-based optimal path planning method that uses an admissible ellipsoidal heuristic to focus exploration on regions that can improve the current solution.
- It preserves key properties such as probabilistic completeness and asymptotic optimality while dramatically speeding up convergence in high-dimensional spaces.
- The algorithm applies direct uniform sampling in a prolate hyperspheroidal subset of the state space, significantly reducing redundant computations compared to global sampling.
The Informed-RRT* algorithm is a sampling-based optimal path planning technique that modifies the classic RRT* algorithm by using an admissible ellipsoidal heuristic to dramatically focus the sampling process after an initial solution is found. This focus yields rapid convergence to near-optimal solutions and robust scalability to higher state dimensions, while rigorously preserving probabilistic completeness and asymptotic optimality. The algorithm’s performance advantages rest on direct uniform sampling from a prolate hyperspheroidal subset of the state space, which contains exactly those states that could enable a better path than the current incumbent solution (Gammell et al., 2014).
1. Motivation and Theoretical Background
Traditional RRT (Rapidly-exploring Random Trees) planners are single-query methods that sample uniformly from the state space and are probabilistically complete but almost surely suboptimal. RRT* introduces a local rewiring step to achieve asymptotic optimality. However, standard RRT* continues to sample over the entire state space $X$, regardless of whether new samples could possibly lead to an improved path. This global sampling regime becomes especially inefficient in large or high-dimensional planning spaces because once a solution is found, the vast majority of new sampled states are irrelevant—they cannot connect the start and goal at lower cost than the incumbent. This ‘curse of dimensionality’ results in dramatic slowdowns in practice (Gammell et al., 2014, Gammell et al., 2017).
Informed-RRT* addresses this inefficiency by shifting, after the first solution is found, to sampling only within the subset of $X$ through which a strictly better path could pass. For the case of minimizing Euclidean path length in $\mathbb{R}^n$, this subset is the interior of a prolate hyperspheroid with its foci at $x_{\text{start}}$ and $x_{\text{goal}}$ and major axis equal to the cost-to-beat $c_{\text{best}}$ (Gammell et al., 2014).
2. Mathematical Definition of the Informed Set
Suppose $x_{\text{start}}, x_{\text{goal}} \in X$ are fixed, and $c_{\text{best}}$ denotes the cost of the best solution found so far. The subset of states that could possibly yield a path of cost less than $c_{\text{best}}$ is

$$X_{\hat{f}} = \left\{ x \in X \;\middle|\; \lVert x - x_{\text{start}} \rVert_2 + \lVert x_{\text{goal}} - x \rVert_2 < c_{\text{best}} \right\}.$$
This set is precisely the interior of a prolate hyperspheroid with foci at $x_{\text{start}}$ and $x_{\text{goal}}$. Its principal semi-axes are $c_{\text{best}}/2$ (major axis) and $\sqrt{c_{\text{best}}^2 - c_{\min}^2}/2$ (the remaining $n-1$ axes), where $c_{\min} = \lVert x_{\text{goal}} - x_{\text{start}} \rVert_2$. As $c_{\text{best}} \to c_{\min}$, the hyperspheroid collapses onto the straight-line segment from start to goal (Gammell et al., 2014, Gammell et al., 2017).
The volume (Lebesgue measure) of the prolate hyperspheroid in $n$ dimensions is

$$\lambda(X_{\hat{f}}) = \frac{\pi^{n/2}}{\Gamma\!\left(\frac{n}{2}+1\right)} \cdot \frac{c_{\text{best}}}{2} \left( \frac{\sqrt{c_{\text{best}}^2 - c_{\min}^2}}{2} \right)^{\!n-1},$$

where $\Gamma(\cdot)$ is the gamma function (Gammell et al., 2017).
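To make the shrinking of the informed set concrete, the volume formula can be evaluated directly; a minimal sketch in Python (the function name is my own):

```python
import math

def informed_set_volume(c_best: float, c_min: float, n: int) -> float:
    """Lebesgue measure of the prolate hyperspheroid with transverse diameter
    c_best and foci separated by c_min, in n dimensions."""
    if c_best < c_min:
        raise ValueError("c_best cannot be smaller than the theoretical minimum c_min")
    unit_ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # volume of the unit n-ball
    semi_major = c_best / 2.0
    semi_minor = math.sqrt(c_best ** 2 - c_min ** 2) / 2.0  # the other n-1 semi-axes
    return unit_ball * semi_major * semi_minor ** (n - 1)
```

For $n = 2$ the expression reduces to the familiar ellipse area $\pi a b$, and the volume vanishes as $c_{\text{best}} \to c_{\min}$.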
3. Direct Uniform Sampling in the Informed Set
Naive rejection sampling from a bounding box or the full state space becomes exponentially inefficient as $X_{\hat{f}}$ shrinks with smaller $c_{\text{best}}$ and higher dimension $n$. Informed-RRT* therefore employs an exact, rejection-free routine for direct uniform sampling in $X_{\hat{f}}$:
- Sample a direction: draw $s \sim \mathcal{N}(0, I_n)$ and normalize, $u = s / \lVert s \rVert_2$.
- Sample a radius: draw $r \sim U(0,1)$ and set $x_{\text{ball}} = r^{1/n}\,u$, which is uniformly distributed in the unit $n$-ball.
- Scale: form the diagonal matrix of semi-axis lengths $L = \operatorname{diag}\!\bigl(c_{\text{best}}/2,\; \sqrt{c_{\text{best}}^2 - c_{\min}^2}/2,\; \ldots,\; \sqrt{c_{\text{best}}^2 - c_{\min}^2}/2\bigr)$.
- Rotate: find the rotation $C \in SO(n)$ that aligns the ellipsoid's major axis with $x_{\text{goal}} - x_{\text{start}}$ (via an SVD-based minimal rotation).
- Affine transform: $x_{\text{rand}} = C L\, x_{\text{ball}} + x_{\text{centre}}$, where $x_{\text{centre}} = (x_{\text{start}} + x_{\text{goal}})/2$.
This affine transformation preserves uniformity by measure-theoretic arguments. As a result, direct ellipsoid sampling maintains maximal density of relevant samples regardless of configuration space dimension (Gammell et al., 2014, Gammell et al., 2017).
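The steps above can be sketched in NumPy for Euclidean cost in $\mathbb{R}^n$ (function names are illustrative, not from the original paper):

```python
import numpy as np

def rotation_to_world_frame(x_start, x_goal):
    """Rotation matrix C aligning the first coordinate axis with
    (x_goal - x_start), via the SVD-based minimal-rotation construction."""
    n = len(x_start)
    a1 = (x_goal - x_start) / np.linalg.norm(x_goal - x_start)
    M = np.outer(a1, np.eye(n)[0])          # outer product with the first basis vector
    U, _, Vt = np.linalg.svd(M)
    # determinant correction keeps the transform a proper rotation
    D = np.diag([1.0] * (n - 1) + [np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ D @ Vt

def sample_informed(x_start, x_goal, c_best, rng=np.random.default_rng()):
    """Uniform sample from the prolate hyperspheroid with foci x_start, x_goal
    and transverse diameter c_best."""
    x_start, x_goal = np.asarray(x_start, float), np.asarray(x_goal, float)
    n = len(x_start)
    c_min = np.linalg.norm(x_goal - x_start)
    # uniform sample in the unit n-ball: Gaussian direction, radius ~ u^(1/n)
    u = rng.normal(size=n)
    x_ball = (rng.random() ** (1.0 / n)) * u / np.linalg.norm(u)
    # scale to the semi-axes, rotate into the world frame, and recenter
    r = np.full(n, np.sqrt(c_best ** 2 - c_min ** 2) / 2.0)
    r[0] = c_best / 2.0
    C = rotation_to_world_frame(x_start, x_goal)
    return C @ (r * x_ball) + (x_start + x_goal) / 2.0
```

Every returned point satisfies the defining inequality of $X_{\hat{f}}$: the sum of its distances to the two foci is at most $c_{\text{best}}$.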
4. Algorithmic Description and Integration with RRT*
The only substantive difference between classic RRT* and Informed-RRT* is in the sampling procedure. The rest of the algorithm—extension, collision checking, neighbor set, and rewiring—remains unchanged.
```
Algorithm InformedRRT*(x_start, x_goal)
    V ← {x_start};  E ← ∅;  c_best ← ∞
    for iter = 1..N do
        if c_best < ∞ then
            x_rand ← SampleEllipsoid(x_start, x_goal, c_best)
        else
            x_rand ← SampleUniform(X_free)
        x_near ← Nearest(V, x_rand)
        x_new ← Steer(x_near, x_rand)
        if CollisionFree(x_near, x_new) then
            V ← V ∪ {x_new}
            X_near ← Near(V, x_new, r_iter)
            x_min ← argmin_{x ∈ X_near ∪ {x_near}} [ Cost(x) + c(x, x_new) ]
            E ← E ∪ {(x_min, x_new)}
            Rewire(X_near, x_new)          // reconnect neighbors through x_new if cheaper
            if x_new ∈ GoalRegion then
                c_new ← PathCost(x_new)
                if c_new < c_best then
                    c_best ← c_new
    return best path found
```
Whenever $c_{\text{best}}$ is finite, all newly generated samples are drawn from $X_{\hat{f}}$. This focuses all future exploration on the only subset of $X$ that can improve the solution, providing both empirical and theoretical speedups.
5. Theoretical Properties and Convergence Rates
- Probabilistic Completeness: Informed-RRT* never discards any branch of the search tree, so, as long as $X_{\hat{f}}$ contains all states that could lie on a better solution, all existing RRT* completeness guarantees remain valid (Gammell et al., 2014).
- Asymptotic Optimality: Because sampling inside $X_{\hat{f}}$ is uniform and the rewiring-radius law inherited from RRT* remains satisfied, the method remains asymptotically optimal: $c_{\text{best}} \to c^*$ almost surely as the number of iterations tends to infinity (Gammell et al., 2014, Gammell et al., 2017).
- Convergence Rate: In the obstacle-free case, the expected solution cost at iteration $i$ obeys a bound of the form $\mathbb{E}[c_i] - c^* \le \gamma^{\,i}\,(c_0 - c^*)$ for some rate $\gamma \in (0,1)$, i.e., linear convergence. This is asymptotically superior to the sublinear convergence of global uniform sampling, especially as the dimension $n$ increases (Gammell et al., 2014, Gammell et al., 2017).
- Dimensional Independence: Direct sampling of the ellipsoid avoids the factorial degradation of performance observed with box-rejection or simple heuristic-based rejection, which becomes asymptotically ineffective in high dimensions (Gammell et al., 2017).
6. Empirical Performance and Variants
Experiments demonstrate that Informed-RRT* finds a first feasible solution about as quickly as RRT*, then drives the path cost down orders of magnitude faster, particularly in high dimensions and wide planning domains. Table 1 summarizes selected empirical results (Gammell et al., 2014, Maseko et al., 2021).
| Scenario | RRT* | Informed-RRT* | Speedup / Improvement |
|---|---|---|---|
| 2D maze (“maze world”) | cost 129.3 @ 133 s | cost 129.3 @ 47 s | ~3× faster |
| Narrow gap | 12.32 s | 4.00 s | ~3× faster |
| Map-width scaling | 0.6–2.2 s | ~0.6 s (independent of width) | Range-invariant |
| Reach 1% of optimal | 5 s | 1.7 s | ~3× faster |
| High-dimensional (after 60 s) | — | 45%, 37%, 32% cost improvement | Substantial in higher dimensions |
Practical enhancements such as post-solution path optimizers (random shortcutting, gradient-based smoothing, or wrapping procedures) further improve both path quality and computational efficiency, with negligible impact on overall complexity (Maseko et al., 2021). Informed-RRT* also forms the basis of state-of-the-art planners such as the OMPL “InformedRRTstar” implementation.
7. Extensions and Generalizations
While direct uniform sampling in the prolate hyperspheroid is analytically feasible for minimum Euclidean-cost problems in linear state spaces, generalizations to kinodynamic planning—where state spaces are not Euclidean and cost functions are non-additive—require alternative strategies.
- MCMC-based Sampling in the Informed Set: In non-Euclidean or kinodynamic settings, the informed set is the sub-level set $X_{\hat{f}} = \{x \in X : \hat{f}(x) < c_{\text{best}}\}$ of a possibly non-convex cost heuristic $\hat{f}$. Markov chain Monte Carlo (MCMC) methods such as Metropolis-Hastings or hit-and-run can generate approximately uniform samples in $X_{\hat{f}}$, maintaining RRT*'s asymptotic optimality (Yi et al., 2017).
- Adaptive Hybrid Local-Global Sampling: Admissible mixtures of global informed sampling and local neighborhood “tube” sampling, adaptively weighted by a reward schedule based on improvement rate, have been shown to further reduce convergence time for practical systems and have formal asymptotic optimality proofs (Faroni et al., 2022).
- Learning-Augmented Variants: Neural Informed RRT* (NIRRT*) integrates the admissible ellipsoidal focus with point-cloud-based guidance from deep neural networks (e.g., PointNet++), maintaining probabilistic completeness and optimality, improving convergence in cluttered environments, and enabling efficient real-world mobile robot navigation (Huang et al., 2023).
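The MCMC-based informed sampling above can be illustrated with a simple Metropolis chain whose target is the indicator of the sub-level set; with a symmetric proposal, accepting a move iff it stays inside the set leaves the uniform distribution over the set invariant (the heuristic below is illustrative, not from the cited work):

```python
import math
import random

def mcmc_informed_samples(f, c_best, x0, n_samples=500, sigma=0.3, seed=0):
    """Metropolis chain targeting the uniform distribution over the informed
    sub-level set {x : f(x) < c_best}; f may be non-convex (kinodynamic costs)."""
    rng = random.Random(seed)
    if not f(x0) < c_best:
        raise ValueError("chain must start inside the informed set")
    x, out = list(x0), []
    for _ in range(n_samples):
        prop = [xi + rng.gauss(0.0, sigma) for xi in x]  # symmetric Gaussian proposal
        if f(prop) < c_best:   # indicator target: accept iff inside the set
            x = prop
        out.append(tuple(x))
    return out

# Illustrative non-convex cost heuristic (not from the cited work)
f_hat = lambda x: math.hypot(x[0], x[1]) + 0.5 * math.sin(3.0 * x[0])
samples = mcmc_informed_samples(f_hat, c_best=2.0, x0=(0.0, 0.0))
```

Unlike the direct ellipsoidal routine, the chain needs a burn-in period and only approaches uniformity asymptotically, but it applies to arbitrary sub-level sets.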
References
- Gammell, J. D., Barfoot, T. D., Srinivasa, S. S., "Informed RRT*: Optimal Sampling-based Path Planning Focused via Direct Sampling of an Admissible Ellipsoidal Heuristic" (Gammell et al., 2014).
- Gammell et al., "Informed Sampling for Asymptotically Optimal Path Planning" (Gammell et al., 2017).
- Yi et al., "Generalizing Informed Sampling for Asymptotically Optimal Sampling-based Kinodynamic Planning via Markov Chain Monte Carlo" (Yi et al., 2017).
- Faroni et al., "Adaptive Hybrid Local-Global Sampling for Fast Informed Sampling-Based Optimal Path Planning" (Faroni et al., 2022).
- Huang et al., "Neural Informed RRT*: Learning-based Path Planning with Point Cloud State Representations under Admissible Ellipsoidal Constraints" (Huang et al., 2023).
- Maseko et al., "Optimised Informed RRTs for Mobile Robot Path Planning" (Maseko et al., 2021).