
Informed-RRT*: Efficient Optimal Planning

Updated 28 January 2026
  • Informed-RRT* is a sampling-based optimal path planning method that uses an admissible ellipsoidal heuristic to focus exploration on regions that can improve the current solution.
  • It preserves key properties such as probabilistic completeness and asymptotic optimality while dramatically speeding up convergence in high-dimensional spaces.
  • The algorithm applies direct uniform sampling in a prolate hyperspheroidal subset of the state space, significantly reducing redundant computations compared to global sampling.

The Informed-RRT* algorithm is a sampling-based optimal path planning technique that modifies the classic RRT* algorithm by using an admissible ellipsoidal heuristic to dramatically focus the sampling process after an initial solution is found. This focus yields rapid convergence to near-optimal solutions and robust scalability to higher state dimensions, while rigorously preserving probabilistic completeness and asymptotic optimality. The algorithm’s performance advantages rest on direct uniform sampling from a prolate hyperspheroidal subset of the state space, which contains exactly those states that could enable a better path than the current incumbent solution (Gammell et al., 2014).

1. Motivation and Theoretical Background

Traditional RRT (Rapidly-exploring Random Trees) planners are single-query methods that sample uniformly from the state space and are probabilistically complete but almost surely suboptimal. RRT* introduces a local rewiring step to achieve asymptotic optimality. However, standard RRT* continues to sample over the entire state space $\mathcal X$, regardless of whether new samples could possibly lead to an improved path. This global sampling regime becomes especially inefficient in large or high-dimensional planning spaces because once a solution is found, the vast majority of new sampled states are irrelevant—they cannot connect the start and goal at lower cost than the incumbent. This ‘curse of dimensionality’ results in dramatic slowdowns in practice (Gammell et al., 2014, Gammell et al., 2017).

Informed-RRT* addresses this inefficiency by shifting, after the first solution is found, to sampling only within the subset of $\mathcal X$ through which a strictly better path could pass. For the case of minimizing Euclidean path length in $\mathbb R^n$, this subset is the interior of a prolate hyperspheroid with its foci at $x_{\rm start}$ and $x_{\rm goal}$, and major axis equal to the cost-to-beat $c_{\rm best}$ (Gammell et al., 2014).

2. Mathematical Definition of the Informed Set

Suppose $x_{\rm start}, x_{\rm goal} \in \mathbb R^n$ are fixed, and $c_{\rm best} \geq \|x_{\rm goal} - x_{\rm start}\|$ denotes the cost of the best solution found so far. The subset of states that could possibly yield a path of cost less than $c_{\rm best}$ is

$$\mathcal X_{\rm inf} = \left\{\, x \in \mathbb R^n : \|x - x_{\rm start}\| + \|x - x_{\rm goal}\| \leq c_{\rm best} \,\right\}$$

This set is precisely the interior of a prolate hyperspheroid with foci at $x_{\rm start}$ and $x_{\rm goal}$. Its principal semi-axes are $r_1 = c_{\rm best}/2$ (transverse axis) and $r_{2,\ldots,n} = \frac12\sqrt{c_{\rm best}^2 - d^2}$, where $d = \|x_{\rm goal} - x_{\rm start}\|$. As $c_{\rm best} \to d$, the hyperspheroid collapses around the straight-line segment from start to goal (Gammell et al., 2014, Gammell et al., 2017).

The volume of the prolate hyperspheroid in nn dimensions is

$$\mathrm{Vol}(\mathcal X_{\rm inf}) = \frac{\pi^{n/2}}{\Gamma\!\left(\frac n2 + 1\right)} \cdot \frac{(c_{\rm best}^2 - d^2)^{(n-1)/2}\, c_{\rm best}}{2^n}$$

where $\Gamma(\cdot)$ is the gamma function (Gammell et al., 2017).
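As a sanity check, the semi-axes and the volume formula above translate directly into code. The helper names below are mine, not from the papers:

```python
import math

def informed_semi_axes(d, c_best):
    """Semi-axes of the prolate hyperspheroid: r1 along the transverse
    (start-to-goal) axis, r_rest shared by the n-1 conjugate axes."""
    r1 = c_best / 2.0
    r_rest = math.sqrt(c_best**2 - d**2) / 2.0
    return r1, r_rest

def informed_volume(n, d, c_best):
    """Lebesgue measure of X_inf in n dimensions, per the formula above."""
    unit_ball = math.pi**(n / 2) / math.gamma(n / 2 + 1)
    return unit_ball * (c_best**2 - d**2)**((n - 1) / 2) * c_best / 2**n
```

For $d = 0$ and $c_{\rm best} = 2$ the set degenerates to the unit ball, so `informed_volume(2, 0, 2)` recovers $\pi$; as $c_{\rm best} \to d$ both the conjugate semi-axes and the volume vanish, reflecting the collapse onto the start–goal segment.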

3. Direct Uniform Sampling in the Informed Set

Naively rejecting samples from a bounding box or the full state space becomes exponentially inefficient as $\mathrm{Vol}(\mathcal X_{\rm inf})/\mathrm{Vol}(\mathcal X)$ shrinks with smaller $c_{\rm best}$ and higher $n$. Informed-RRT* therefore uses an exact, rejection-free routine for direct uniform sampling in $\mathcal X_{\rm inf}$:

  1. Sample a random direction $v \sim \mathcal N(0, I_n)$ and normalize: $v \gets v/\|v\|$.
  2. Sample $u \sim U[0, 1]$ and set $x_{\rm ball} = u^{1/n}\, v$, a uniform sample in the unit $n$-ball.
  3. Scale: form $L = \mathrm{diag}(r_1, r_2, \ldots, r_n)$ from the semi-axes above.
  4. Rotate: compute $C \in SO(n)$ aligning the first axis with $x_{\rm goal} - x_{\rm start}$ (via an SVD-based minimal rotation).
  5. Affine transform: $x = C L\, x_{\rm ball} + x_{\rm centre}$, where $x_{\rm centre} = \frac12(x_{\rm start} + x_{\rm goal})$.

This affine transformation preserves uniformity by measure-theoretic arguments. As a result, direct ellipsoid sampling maintains maximal density of relevant samples regardless of configuration space dimension (Gammell et al., 2014, Gammell et al., 2017).
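Under the stated assumptions (Euclidean cost, $c_{\rm best} \geq d$), the five steps can be sketched in NumPy; the function names are illustrative, not from the papers:

```python
import numpy as np

def rotation_to_world(x_start, x_goal):
    """Rotation C in SO(n) mapping the first basis vector onto the
    transverse-axis direction, via the SVD of a1 @ e1^T."""
    a1 = (x_goal - x_start) / np.linalg.norm(x_goal - x_start)
    e1 = np.eye(len(a1))[0]
    U, _, Vt = np.linalg.svd(np.outer(a1, e1))
    # Correct the last diagonal entry so det(C) = +1 (a proper rotation).
    diag = [1.0] * (len(a1) - 1) + [np.linalg.det(U) * np.linalg.det(Vt)]
    return U @ np.diag(diag) @ Vt

def sample_informed(x_start, x_goal, c_best, rng):
    """One uniform sample from the prolate hyperspheroid (steps 1-5)."""
    n = len(x_start)
    d = np.linalg.norm(x_goal - x_start)
    # Steps 1-2: uniform sample in the unit n-ball.
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    x_ball = rng.random() ** (1.0 / n) * v
    # Step 3: semi-axes r1 = c/2, r_rest = sqrt(c^2 - d^2)/2.
    r = np.array([c_best / 2.0] + [np.sqrt(c_best**2 - d**2) / 2.0] * (n - 1))
    # Steps 4-5: rotate and translate to the world frame.
    C = rotation_to_world(x_start, x_goal)
    return C @ (r * x_ball) + (x_start + x_goal) / 2.0
```

Every returned state satisfies $\|x - x_{\rm start}\| + \|x - x_{\rm goal}\| \leq c_{\rm best}$ by construction, with no rejection loop.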

4. Algorithmic Description and Integration with RRT*

The only substantive difference between classic RRT* and Informed-RRT* is in the sampling procedure. The rest of the algorithm—extension, collision checking, neighbor set, and rewiring—remains unchanged.

Algorithm InformedRRT*(x_start, x_goal)
    V ← {x_start};  E ← ∅;  c_best ← ∞
    for iter = 1 .. N do
        if c_best < ∞ then
            x_rand ← SampleEllipsoid(x_start, x_goal, c_best)
        else
            x_rand ← SampleUniform(X_free)
        x_nearest ← Nearest(V, x_rand)
        x_new ← Steer(x_nearest, x_rand)
        if CollisionFree(x_nearest, x_new) then
            X_near ← Near(V, x_new, r_iter)
            x_min ← argmin_{x ∈ X_near} [Cost(x) + c(x, x_new)]    ▷ choose parent
            V ← V ∪ {x_new};  E ← E ∪ {(x_min, x_new)}
            Rewire(X_near, x_new)    ▷ reconnect neighbours through x_new when cheaper
            if x_new ∈ GoalRegion and Cost(x_new) < c_best then
                c_best ← Cost(x_new)
    return best path found
(Gammell et al., 2014)
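A deliberately stripped-down, obstacle-free 2D version of this loop illustrates the sampling switch. Rewiring and collision checking are omitted, the goal is reached by a straight segment from any node, and all names are mine — a toy sketch under these assumptions, not the full algorithm:

```python
import math, random

def sample_informed_2d(xs, xg, c_best, rnd):
    """Uniform sample inside the 2D informed ellipse."""
    d = math.dist(xs, xg)
    r1 = c_best / 2.0
    r2 = math.sqrt(max(c_best**2 - d**2, 0.0)) / 2.0
    t = 2.0 * math.pi * rnd.random()
    rad = math.sqrt(rnd.random())           # uniform point in the unit disc
    px, py = r1 * rad * math.cos(t), r2 * rad * math.sin(t)
    th = math.atan2(xg[1] - xs[1], xg[0] - xs[0])
    cx, cy = (xs[0] + xg[0]) / 2.0, (xs[1] + xg[1]) / 2.0
    return (cx + px * math.cos(th) - py * math.sin(th),
            cy + px * math.sin(th) + py * math.cos(th))

def informed_rrt_toy(xs, xg, iters=1500, step=0.5, seed=1):
    """Grow a tree from xs; once a solution cost is known, switch from
    global uniform sampling to informed sampling. Returns the best cost
    found, which is always >= dist(xs, xg)."""
    rnd = random.Random(seed)
    nodes, cost = [xs], {0: 0.0}
    c_best = math.inf
    for _ in range(iters):
        if c_best < math.inf:
            xr = sample_informed_2d(xs, xg, c_best, rnd)
        else:
            xr = (rnd.uniform(-10.0, 10.0), rnd.uniform(-10.0, 10.0))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], xr))
        d = math.dist(nodes[i], xr)
        if d == 0.0:
            continue
        s = min(1.0, step / d)                      # Steer toward x_rand
        xn = (nodes[i][0] + s * (xr[0] - nodes[i][0]),
              nodes[i][1] + s * (xr[1] - nodes[i][1]))
        nodes.append(xn)
        cost[len(nodes) - 1] = cost[i] + math.dist(nodes[i], xn)
        c_best = min(c_best, cost[len(nodes) - 1] + math.dist(xn, xg))
    return c_best
```

With start $(0,0)$ and goal $(5,0)$, the ellipse tightens around the straight-line segment and the returned cost decreases toward the optimum of 5.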

Whenever $c_{\rm best}$ is finite, all newly generated samples are drawn from $\mathcal X_{\rm inf}$. This focuses all future exploration on the sole subset of $\mathcal X$ that can improve the solution, providing both empirical and theoretical speedup.

5. Theoretical Properties and Convergence Rates

  • Probabilistic Completeness: Informed-RRT* never discards any branch of the search tree, and $\mathcal X_{\rm inf}$ contains every state that could lie on a better solution, so all existing RRT* completeness guarantees remain valid (Gammell et al., 2014).
  • Asymptotic Optimality: Because sampling inside $\mathcal X_{\rm inf}$ is uniform and the rewiring-radius law inherited from RRT* remains satisfied, the method remains asymptotically optimal: $\lim_{N\to\infty} c_{\rm best} = c^*$ almost surely (Gammell et al., 2014, Gammell et al., 2017).
  • Convergence Rate: In the obstacle-free case, as $c_{\rm best}$ approaches the optimum $d$, the expected cost after each new sample satisfies

$$E\!\left[c_{\rm best}^{(i)} - d\right] = \frac{n-1}{n+1}\left(c_{\rm best}^{(i-1)} - d\right)$$

i.e., linear convergence toward the optimum with rate $\mu = \frac{n-1}{n+1}$. This is asymptotically superior to the sublinear rate of global uniform sampling, especially as $n$ increases (Gammell et al., 2014, Gammell et al., 2017).

  • Dimensional Independence: Direct sampling of the ellipsoid avoids the factorial degradation of performance observed with box-rejection or simple heuristic-based rejection, which becomes asymptotically ineffective in high dimensions (Gammell et al., 2017).
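The factorial decay is easy to quantify. Even against the hyperspheroid's own tightest axis-aligned bounding box, rejection sampling accepts at the classic ball-in-cube rate; the helper name below is mine:

```python
import math

def tight_box_acceptance(n):
    """Probability that a uniform sample from the hyperspheroid's tightest
    axis-aligned bounding box lands inside the hyperspheroid. This equals
    the unit n-ball / unit n-cube volume ratio, independent of c_best and
    d, and decays factorially through the Gamma term."""
    return math.pi**(n / 2) / (math.gamma(n / 2 + 1) * 2**n)
```

The acceptance probability is $\pi/4 \approx 0.785$ for $n = 2$ but drops below $10^{-5}$ by $n = 16$, which is why direct sampling rather than rejection is essential in high dimensions.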

6. Empirical Performance and Variants

Experiments demonstrate that Informed-RRT* finds a first feasible solution about as rapidly as RRT*, but then drives the solution cost down orders of magnitude faster in high dimensions and on wide planning domains. Table 1 summarizes selected empirical results (Gammell et al., 2014, Maseko et al., 2021).

| Scenario | RRT* (time / cost) | Informed-RRT* (time / cost) | Speedup / improvement |
| --- | --- | --- | --- |
| 2D maze ("maze world") | cost 129.3 @ 133 s | cost 129.3 @ 47 s | ~3× faster |
| Narrow gap | 12.32 s | 4.00 s | ~3× faster |
| Map-width scaling | 0.6–2.2 s | ~0.6 s (independent of width $l$) | range-invariant |
| Reach 1% of optimal | 5 s | 1.7 s | ~3× faster |
| High-dimensional ($n = 2/6/8$, after 60 s) | — | 45%, 37%, 32% cost improvement | substantial as $n$ grows |

Practical enhancements such as post-solution path optimizers (random shortcutting, gradient-based smoothing, or wrapping procedures) further improve both path quality and computational efficiency, with negligible impact on overall complexity (Maseko et al., 2021). Informed-RRT* also forms the basis of state-of-the-art planner implementations such as OMPL's “InformedRRTstar”.

7. Extensions and Generalizations

While direct uniform sampling in the prolate hyperspheroid is analytically feasible for minimum Euclidean-cost problems in linear state spaces, generalizations to kinodynamic planning—where state spaces are not Euclidean and cost functions are non-additive—require alternative strategies.

  • MCMC-based Sampling in the Informed Set: In non-Euclidean or kinodynamic settings, the informed set is the sub-level set $\{x \mid f(x) \leq c_{\rm best}\}$ of a generally non-convex cost function $f$. Markov Chain Monte Carlo (MCMC) methods such as Metropolis–Hastings or Hit-and-Run efficiently generate uniform samples in $\mathcal X_{\rm inf}$, maintaining RRT*'s asymptotic optimality (Yi et al., 2017).
  • Adaptive Hybrid Local-Global Sampling: Admissible mixtures of global informed sampling and local neighborhood “tube” sampling, adaptively weighted by a reward schedule based on improvement rate, have been shown to further reduce convergence time for practical systems and have formal asymptotic optimality proofs (Faroni et al., 2022).
  • Learning-Augmented Variants: Neural Informed RRT* (NIRRT*) integrates the admissible ellipsoidal focus with point-cloud-based guidance from deep neural networks (e.g., PointNet++), maintaining probabilistic completeness and optimality, improving convergence in cluttered environments, and enabling efficient real-world mobile robot navigation (Huang et al., 2023).
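As a rough illustration of the MCMC route (Yi et al., 2017 use more sophisticated machinery), a Hit-and-Run chain over a sub-level set can be sketched as follows; all names are mine, and the chord step uses simple 1-D rejection rather than an exact bracket:

```python
import math, random

def hit_and_run(f, level, x0, steps, rnd, t_max=2.0):
    """Hit-and-Run over the sub-level set {x : f(x) <= level}. Each step
    picks a random direction through the current point and draws the next
    point from the feasible part of that line (via 1-D rejection on the
    chord parameter t; t = 0 is always feasible, so the loop terminates
    with probability one). The chain converges to the uniform
    distribution on the set."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        u = [rnd.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(ui * ui for ui in u))
        u = [ui / norm for ui in u]
        while True:
            t = rnd.uniform(-t_max, t_max)
            cand = [x[i] + t * u[i] for i in range(n)]
            if f(cand) <= level:
                x = cand
                break
    return x

# Example: the Euclidean informed set expressed as a sub-level set of f.
start, goal = (0.0, 0.0), (1.0, 0.0)
f = lambda x: math.dist(x, start) + math.dist(x, goal)
sample = hit_and_run(f, 1.5, (0.5, 0.0), steps=200, rnd=random.Random(0))
```

Unlike the direct ellipsoid transform, this only needs evaluations of $f$, which is what makes it applicable when the informed set has no closed-form parametrization.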

References

  • Gammell, J. D., Barfoot, T. D., Srinivasa, S. S., "Informed RRT*: Optimal Sampling-based Path Planning Focused via Direct Sampling of an Admissible Ellipsoidal Heuristic" (Gammell et al., 2014).
  • Gammell et al., "Informed Sampling for Asymptotically Optimal Path Planning" (Gammell et al., 2017).
  • Yi et al., "Generalizing Informed Sampling for Asymptotically Optimal Sampling-based Kinodynamic Planning via Markov Chain Monte Carlo" (Yi et al., 2017).
  • Faroni et al., "Adaptive Hybrid Local-Global Sampling for Fast Informed Sampling-Based Optimal Path Planning" (Faroni et al., 2022).
  • Huang et al., "Neural Informed RRT*: Learning-based Path Planning with Point Cloud State Representations under Admissible Ellipsoidal Constraints" (Huang et al., 2023).
  • Maseko et al., "Optimised Informed RRTs for Mobile Robot Path Planning" (Maseko et al., 2021).
