
Grounded and Lifted Planning

Updated 23 November 2025
  • Grounded and lifted planning are foundational paradigms: grounded planning uses full propositional instantiation, while lifted planning employs parameterized action schemas for abstraction.
  • Grounded planning can lead to combinatorial explosion, while lifted planning dynamically instantiates actions to enhance scalability in complex domains.
  • Both approaches are vital in AI planning, with lifted methods offering significant advantages in learning efficiency, heuristic generalization, and computational performance.

Grounded and lifted planning are two foundational paradigms in automated planning and decision-making. Grounded planning operates on fully instantiated, propositional representations, while lifted planning manipulates parameterized action schemas and predicates, allowing symbolic reasoning and abstraction across variable domain sizes. The distinction between these approaches is central to scalability, representational efficiency, generalization, learning, and computational complexity in both classical and stochastic planning.

1. Formal Definitions and Representational Distinctions

Grounded planning, also known as propositional or classical planning, encodes a planning task as a tuple $\langle P, A, s_0, G\rangle$, where $P$ is a finite set of ground propositional fluents, $A$ is a set of ground actions (each a tuple of propositional preconditions and effects), $s_0$ is the initial state, and $G$ is the goal formula. Every predicate and action in the domain is instantiated over all possible object tuples, resulting in an explicit state and action space that grows combinatorially with domain size and parameter arities (Juba et al., 2021, Khardon et al., 2017, Mantenoglou et al., 13 Nov 2025).

By contrast, lifted planning is defined over typed predicates, objects, and parameterized action schemas: $\Pi_{\mathrm{lifted}} = \langle T, \mathcal{F}, \mathcal{A}, O, s_0, G \rangle$, with $T$ a set of types, $\mathcal{F}$ a set of lifted predicates, $\mathcal{A}$ the set of action schemas, $O$ the objects, and $s_0$ and $G$ as before. Action schemas remain as first-order templates until instantiated as needed, with explicit substitutions only performed during search or inference (Juba et al., 2021, Khardon et al., 2017, Soutchanski et al., 2023, Mantenoglou et al., 13 Nov 2025).

Grounded planning is thus a special case of lifted planning where all variables have been substituted by concrete domain objects.
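
To make the distinction concrete, the following minimal Python sketch (a hypothetical Blocksworld-style schema and object set, not drawn from any of the cited systems) grounds a single lifted action schema by substituting domain objects for its parameters. For a $k$-ary schema over objects $O$, this produces $|O|^k$ ground actions, which is exactly the blow-up quantified in Section 2.

```python
from itertools import product

# Hypothetical mini-example: one lifted schema "stack(?x, ?y)" and 5 blocks.
objects = [f"b{i}" for i in range(1, 6)]

lifted_schema = {
    "name": "stack",
    "params": ["?x", "?y"],
    "pre":  [("holding", "?x"), ("clear", "?y")],
    "add":  [("on", "?x", "?y"), ("clear", "?x"), ("handempty",)],
    "del":  [("holding", "?x"), ("clear", "?y")],
}

def ground(schema, objs):
    """Enumerate every ground action by substituting objects for parameters."""
    ground_actions = []
    for binding in product(objs, repeat=len(schema["params"])):
        sub = dict(zip(schema["params"], binding))
        inst = lambda atom: tuple(sub.get(t, t) for t in atom)
        ground_actions.append({
            "name": f"{schema['name']}({', '.join(binding)})",
            "pre": [inst(a) for a in schema["pre"]],
            "add": [inst(a) for a in schema["add"]],
            "del": [inst(a) for a in schema["del"]],
        })
    return ground_actions

actions = ground(lifted_schema, objects)
print(len(actions))  # 5^2 = 25 ground actions for one 2-ary schema over 5 objects
```

Real grounders additionally apply type, mutex, and reachability pruning, but the worst-case growth in $|O|$ remains polynomial of degree $k$ per schema.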

2. Computational and Algorithmic Consequences

The primary computational implications are in scalability and memory complexity:

| | Grounded Planning | Lifted Planning |
|---|---|---|
| State Representation | Each state is a set of fully instantiated atoms; state space size $2^{|\mathrm{atoms}|}$ | Symbolic, first-order; typically parameterized by logical variables and quantifiers (Khardon et al., 2017) |
| Action Enumeration | Expands all parameterized actions over domain objects: $O(|O|^k)$ per $k$-ary action | Instantiates actions as needed; representation size independent of $|O|$ |
| Search Performance | Intractable for moderate domain sizes due to exponential blow-up | Can scale to very large or open domains if symmetries are fully exploited |

Lifted planners instantiate actions dynamically, often with search and heuristic evaluation performed directly on symbolic schemas (Soutchanski et al., 2023, Chen et al., 2023), enabling substantial savings in situations where object counts or parameter arities are large.
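
A simplified sketch of this dynamic instantiation (illustrative only, not the successor generator of any cited planner) matches schema preconditions against the atoms of the current state and extends partial substitutions, so that only the applicable bindings, rather than all $O(|O|^k)$ ground actions, are ever materialized:

```python
def match(schema, state):
    """Find all parameter bindings under which every precondition of the
    schema holds in `state`, extending partial substitutions one
    precondition at a time (a simple relational join). The full O(|O|^k)
    space of bindings is never enumerated."""
    subs = [{}]
    for pre in schema["pre"]:
        extended = []
        for sub in subs:
            for atom in state:
                if atom[0] != pre[0] or len(atom) != len(pre):
                    continue
                cand = dict(sub)
                consistent = True
                for term, value in zip(pre[1:], atom[1:]):
                    if term.startswith("?"):
                        if cand.setdefault(term, value) != value:
                            consistent = False
                            break
                    elif term != value:
                        consistent = False
                        break
                if consistent:
                    extended.append(cand)
        subs = extended
    return subs

# A hypothetical 2-ary schema and a small state: only two bindings apply.
stack = {"name": "stack", "params": ["?x", "?y"],
         "pre": [("holding", "?x"), ("clear", "?y")]}
state = {("holding", "b1"), ("clear", "b2"), ("clear", "b3")}
print(match(stack, state))   # bindings for (b1, b2) and (b1, b3), in some order
```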

3. Learning and Abstraction in Lifted and Grounded Settings

Learning planning models and heuristics can be approached from both grounded and lifted perspectives. In grounded learning, symbolic models or heuristics are constructed from fully instantiated state-action traces, which may cause sample and computational complexity to scale poorly with domain size (Juba et al., 2021, Chen et al., 2023).

In lifted learning, action schemas, constraints, or value functions are parameterized, enabling abstraction over underlying objects and improved data efficiency. For example, safe model-free planners can learn conservative lifted action models from observed trajectories with sample complexity dependent only on the number of parameter-bound fluents (not object count), achieving soundness guarantees in out-of-sample planning (Juba et al., 2021). Similarly, lifted architectures for learning domain-independent heuristics (e.g., lifted graph neural networks) exhibit generalization to much larger domains than were seen during training, precisely because their representations exploit the invariance properties of lifted planning (Chen et al., 2023).
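
The conservative lifting idea can be sketched as follows (an illustrative simplification with hypothetical names, not the exact algorithm of the cited work): preconditions are taken as the intersection of lifted atoms that held before every observed application of a schema, and effects as the union of observed lifted add/delete effects. Because the learned model is expressed over parameters, its size is governed by the number of parameter-bound fluents rather than by the number of objects appearing in the traces.

```python
def lift(atom, inv):
    """Map object arguments back to schema parameters; return None if the
    atom mentions an object that is not an argument of this application."""
    args = []
    for t in atom[1:]:
        if t not in inv:
            return None
        args.append(inv[t])
    return (atom[0], *args)

def learn_safe_schema(name, arity, observations):
    """observations: (pre_state, binding, post_state) triples, where binding
    maps parameters to the objects of one observed application.
    Preconditions: intersection of lifted atoms true before every application.
    Effects: union of lifted add/delete effects seen in any application."""
    pre, add, delete = None, set(), set()
    for s, binding, s2 in observations:
        inv = {obj: param for param, obj in binding.items()}
        lifted_pre = {a for a in (lift(x, inv) for x in s) if a is not None}
        pre = lifted_pre if pre is None else pre & lifted_pre
        add |= {a for a in (lift(x, inv) for x in s2 - s) if a is not None}
        delete |= {a for a in (lift(x, inv) for x in s - s2) if a is not None}
    return {"name": name, "params": [f"?p{i}" for i in range(arity)],
            "pre": sorted(pre or set()), "add": sorted(add), "del": sorted(delete)}

# Two observed applications of a hypothetical unstack(?p0, ?p1) action:
obs = [
    ({("on", "a", "b"), ("clear", "a"), ("handempty",)}, {"?p0": "a", "?p1": "b"},
     {("holding", "a"), ("clear", "b")}),
    ({("on", "c", "d"), ("clear", "c"), ("clear", "e"), ("handempty",)}, {"?p0": "c", "?p1": "d"},
     {("holding", "c"), ("clear", "d"), ("clear", "e")}),
]
print(learn_safe_schema("unstack", 2, obs))
```

Note that adding more objects or longer traces never makes the learned schema larger: atoms that mention objects outside an action's arguments are simply dropped, and the intersection over applications only shrinks the precondition set, which is what makes the model conservative.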

4. Advanced Planning Frameworks: Stochastic and Constrained Domains

Grounded and lifted planning paradigms extend to stochastic settings and to constrained PDDL fragments. In lifted symbolic dynamic programming for stochastic planning, value iteration and policy synthesis are performed over compact relational representations, aggregating over quantified variables and exploiting domain symmetries for object-parameter maximization and case-based partitioning (Khardon et al., 2017). The complexity of these algorithms is determined by the size of relational expressions rather than explicit enumeration.
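
As a schematic illustration (the notation here is assumed for exposition rather than quoted from the cited work), the lifted Bellman backup maximizes jointly over action schemas and over the objects bound to their parameters:

$$
V^{n+1}(s) \;=\; \max_{A \in \mathcal{A}} \;\max_{\vec{x}} \Big[\, R(s) + \gamma \sum_{s'} P\big(s' \mid s, A(\vec{x})\big)\, V^{n}(s') \,\Big],
$$

where $A(\vec{x})$ denotes schema $A$ applied under parameter binding $\vec{x}$. In symbolic dynamic programming, both the expectation and the maximization over $\vec{x}$ are carried out on case-partitioned relational expressions representing $V^n$, so the cost of a backup is governed by the size of those expressions rather than by the number of ground states or bindings.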

In planning with qualitative state-trajectory constraints (e.g., PDDL3), lifted compilation methods (such as LiftedTCORE and LCC) compile away trajectory constraints without grounding, which is essential for tasks with high-arity actions and large object universes (Mantenoglou et al., 13 Nov 2025). Lifted regression and constraint injection in these compilers ensure correctness while keeping the compiled task's size orders of magnitude below that required by grounded approaches.
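
The flavor of such a compilation can be conveyed by a deliberately simplified Python sketch (hypothetical structures and names; this is not LiftedTCORE or LCC): a constraint "sometime $\exists \vec{x}.\, p(\vec{x})$" is compiled away by introducing one fresh monitor fluent, adding it as an effect of every schema that can achieve $p$, seeding it in the initial state if $p$ already holds, and conjoining it to the goal, all at the schema level and without grounding.

```python
def compile_sometime(predicate, schemas, init, goal):
    """Compile a 'sometime exists-x. predicate(x...)' trajectory constraint
    into a plain lifted task: add a monitor fluent that becomes true whenever
    the predicate is achieved, and require it in the goal. Simplified sketch."""
    monitor = ("constraint-satisfied", predicate)
    for schema in schemas:
        # Any schema with an add effect on `predicate` can satisfy the constraint.
        if any(eff[0] == predicate for eff in schema["add"]):
            schema["add"].append(monitor)
    if any(atom[0] == predicate for atom in init):
        init.add(monitor)            # already satisfied in the initial state
    goal.add(monitor)
    return schemas, init, goal

# Hypothetical usage: require that some block is held at some point in the plan.
schemas = [{"name": "pickup", "params": ["?x"],
            "pre": [("clear", "?x"), ("ontable", "?x"), ("handempty",)],
            "add": [("holding", "?x")],
            "del": [("clear", "?x"), ("ontable", "?x"), ("handempty",)]}]
init = {("ontable", "b1"), ("clear", "b1"), ("handempty",)}
goal = {("ontable", "b1")}
compile_sometime("holding", schemas, init, goal)
print(schemas[0]["add"])  # [('holding', '?x'), ('constraint-satisfied', 'holding')]
print(goal)               # goal now also requires ('constraint-satisfied', 'holding')
```

Constraints with richer temporal structure (e.g., sometime-before or always conditions) require parameterized monitor fluents and lifted regression of the constraint through schema effects, which is where the cited compilers do the substantive work.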

5. Empirical and Algorithmic Comparisons

Empirical evaluations consistently demonstrate that lifted approaches are necessary to scale to domains characterized by many objects or high-arity predicates/actions:

  • In safe lifted model-free planning, the number of demonstrations needed for sound model recovery is constant (at most two) across widely varying domain sizes, versus an order of magnitude higher for grounded learners (Juba et al., 2021).
  • Theorem-proving lifted planners expand an order of magnitude fewer situations and produce shorter plans in most IPC benchmarks compared to grounded planners, particularly when using domain-independent lifted heuristics such as delete-relaxation (Soutchanski et al., 2023).
  • In constraint compilation for large domains (e.g., BLOCKSWORLD with towers), lifted compilers yield task sizes that are linear in the number of lifted actions and constraints, while grounded compilers suffer combinatorial blow-up (Mantenoglou et al., 13 Nov 2025).
  • In learning heuristics, lifted GNN-based heuristics generalize robustly to unseen domain sizes, while grounded versions are restricted by the size and sparseness of the input graphs (Chen et al., 2023).

6. Limitations and Ongoing Challenges

While lifted planning provides powerful abstraction and generalization capabilities, it is subject to several technical challenges:

  • Not all constraints are efficiently expressible in lifted form, especially those with deep quantification or non-local trajectory conditions (Mantenoglou et al., 13 Nov 2025).
  • Symbolic dynamic programming in lifted domains can become intractable or only approximately lifted when sum-aggregates, exogenous events, or hybrid continuous features are present (Khardon et al., 2017).
  • Certain non-Markovian or non-liftable reward structures require specialized representations, and current lifted inference algorithms may not be fully general for alternation-style queries (e.g., alternating $\max$ and $\sum$ blocks in planning as inference) (Khardon et al., 2017).
  • Learning lifted action schemas from raw perception (images) often requires intermediate representations (e.g., spatial predicates from a vision system), and generalization still depends on the quality of the symbolic abstraction (Liberman et al., 2022).

Open directions include hybridizing grounded and lifted approaches to target only relevant groundings in high-complexity domains, optimizing lifted heuristics for efficiency, and extending symbolic representations to partially observable, exogenous, or hybrid domains.

7. Synthesis and Research Impact

The ground–lifted dichotomy remains central to the design, analysis, and deployment of automated planners and learning frameworks. Lifted methods are indispensable for scalability and for generalization across domain sizes and structural variations. Advances in lifted reasoning, lifted compilation, lifted learning, and generalized inference architectures collectively drive progress in AI planning, as evidenced by recent benchmarks and competition results (Juba et al., 2021, Mantenoglou et al., 13 Nov 2025, Khardon et al., 2017, Soutchanski et al., 2023, Singh et al., 2023, Chen et al., 2023, Liberman et al., 2022).

Continued research focuses on extending the expressive power of lifted planning without compromising computational efficiency, as well as bridging symbolic and statistical paradigms for safe, robust, and interpretable AI planning systems.
