
Hierarchical Task Network Planning

Updated 27 January 2026
  • HTN planning is a hierarchical AI approach that decomposes complex tasks into simpler subtasks using domain-specific methods and primitive actions.
  • It integrates symbolic planning with LLM-assisted learning, reducing LLM query frequency by up to 75% while ensuring plan soundness.
  • Applications span robotics, logistics, and multi-agent systems, with verifier tasks bolstering reliability and robust plan execution.

Hierarchical Task Network (HTN) Planning Framework

Hierarchical Task Network (HTN) planning is a paradigm within artificial intelligence planning that models complex tasks and goal achievement via explicit, hierarchical decomposition. HTN frameworks rely on a library of domain-specific methods that encode admissible reductions of abstract (compound) tasks to simpler subtasks, terminating in executable primitive actions. They underpin a broad array of fielded planning systems and have served as the foundation for classical planners such as SHOP2, robotic platforms, multi-agent systems, and recent LLM-integrated symbolic planners.

1. Formal Model and Core Semantics

An HTN planning instance is formally specified by the tuple

$\langle T, M, O, s_0, N_0 \rangle$

where:

  • $T$ is the finite set of task symbols, partitioned into primitive ($T_p$) and compound ($T_c$) tasks.
  • $M$ is a set of hierarchical methods. Each $m \in M$ is a triple $(t_m, pre_m, st_m)$: $t_m \in T_c$ is the compound task to decompose, $pre_m$ is a set of (possibly negative) first-order preconditions, and $st_m$ is a (possibly ordered) list of subtasks drawn from $T$.
  • $O$ is the set of primitive operators, each $o = (t_o, pre_o, add_o, del_o)$ associating a primitive task with its preconditions and STRIPS add/delete lists.
  • $s_0$ is the initial ground state (a finite set of atoms).
  • $N_0$ is the initial task network (commonly an ordered list of tasks to achieve).

A solution is a sequence of primitive tasks $\pi = [a_1, \ldots, a_k]$ such that, starting from $(s_0, N_0)$, tasks are processed as follows:

  • If the next task $t$ in $N$ is compound, choose a method $m$ such that $t_m\theta = t$ and $pre_m\theta \subseteq s$ (for some grounding $\theta$), then replace $t$ with $st_m\theta$ in $N$.
  • If $t$ is primitive, choose an operator $o$ with $t_o\theta = t$ and $pre_o\theta \subseteq s$, and apply $o$ to transform $s$ (i.e., $s' = (s \setminus del_o\theta) \cup add_o\theta$). Remove $t$ from $N$. Iterate until $N$ is empty, yielding a primitive plan $\pi$ that transforms $s_0$ into $s^*$.
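The processing rules above can be sketched as a small recursive search over (state, task-network) pairs. This is a minimal illustrative sketch for the ground (variable-free) case; the names `Operator`, `Method`, and `solve` are hypothetical and do not come from any particular planner.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:          # o = (t_o, pre_o, add_o, del_o)
    task: str
    pre: frozenset
    add: frozenset
    delete: frozenset

@dataclass(frozen=True)
class Method:            # m = (t_m, pre_m, st_m)
    task: str
    pre: frozenset
    subtasks: tuple

def solve(state, tasks, methods, operators):
    """Depth-first decomposition; returns a primitive plan or None."""
    if not tasks:
        return []
    t, rest = tasks[0], tasks[1:]
    if t in operators:                        # primitive task: apply operator
        o = operators[t]
        if o.pre <= state:
            s2 = (state - o.delete) | o.add   # STRIPS state transition
            tail = solve(s2, rest, methods, operators)
            if tail is not None:
                return [t] + tail
        return None
    for m in methods.get(t, []):              # compound task: try each method
        if m.pre <= state:
            plan = solve(state, list(m.subtasks) + rest, methods, operators)
            if plan is not None:
                return plan
    return None
```

Note that the recursion naturally backtracks over method choices, which is the essential search behavior of a SHOP-style planner.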

HTN semantics strictly distinguishes between the domain control knowledge encoded in hierarchical methods and the search process, resulting in solution plans closely following human-specified task reduction protocols (Xu et al., 17 Nov 2025).

2. Structure and Representation of Methods

In HTN frameworks such as SHOP2 and ChatHTN, methods are parameterized schemas of the form:

  • Name: Identifier and parameter list
  • Preconditions: Set of first-order predicates
  • Subtask List: Ordered (or partially ordered) sequence of tasks, which may be primitive or compound

Example (extracted from an LLM-integrated HTN domain):

name:      rescueSurvivorM2(?drone, ?survivor, ?loc, ?SH)
Pre:       { isDrone(?drone), safeHaven(?SH), atSurvivor(?survivor, ?loc), atDrone(?drone, ?loc) }
Subtasks:  [ pickUpSurvivor(?drone, ?survivor, ?loc), fly(?drone, ?loc, ?SH), unload(?drone, ?survivor, ?SH) ]

Each HTN method thereby encodes admissible sub-plans for reducing domain-specific compound tasks, with method selection governed by their precondition satisfaction in the current planning state (Xu et al., 17 Nov 2025).
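For illustration, the method schema above can be rendered as plain data together with a grounding step that applies a substitution $\theta$. The dictionary layout and the `ground` helper are hypothetical encodings, not the syntax of SHOP2 or ChatHTN.

```python
# The rescueSurvivorM2 schema as data (illustrative encoding).
rescue_survivor_m2 = {
    "name": "rescueSurvivorM2",
    "params": ["?drone", "?survivor", "?loc", "?SH"],
    "pre": [("isDrone", "?drone"),
            ("safeHaven", "?SH"),
            ("atSurvivor", "?survivor", "?loc"),
            ("atDrone", "?drone", "?loc")],
    "subtasks": [("pickUpSurvivor", "?drone", "?survivor", "?loc"),
                 ("fly", "?drone", "?loc", "?SH"),
                 ("unload", "?drone", "?survivor", "?SH")],
}

def ground(term, theta):
    """Apply substitution theta to a predicate/task tuple."""
    return tuple(theta.get(x, x) for x in term)
```

A method is applicable in state $s$ under grounding $\theta$ exactly when every grounded precondition tuple is in $s$; the grounded subtask list then replaces the compound task in the network.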

3. Planner Architectures and Algorithmic Execution

State-of-the-art HTN planners implement variants of depth-first, forward search driven by interleaved decomposition and execution. The core planner:

  • Repeatedly selects the next task in the current task list/network.
  • Tries the applicable methods in turn when the task is compound; if none apply, it may invoke a fallback mechanism (e.g., an LLM query as in ChatHTN).
  • For a primitive task, applies the corresponding operator if preconditions hold, updates the state, and continues.

System architecture, as exemplified by ChatHTN, comprises:

  • Symbolic Planner Core: SHOP-style left-to-right, depth-first HTN planner.
  • LLM Interface (ChatHTN/ChatGPT): Invoked when no method matches a compound task; returns a grounded sequence of primitive tasks as a decomposition.
  • Verifier/Method Learner: For each LLM response, inserts an explicit verifier task to ensure decomposition achieves the desired effect; for learning, regresses effects to minimal preconditions and generalizes the observed primitive decomposition into a reusable method by lifting constants to variables.
  • Memoization/Cache: Stores learned methods within the current planning episode to amortize LLM query frequency (Xu et al., 17 Nov 2025, Munoz-Avila et al., 17 May 2025).

Pseudocode for the integrated planner is given in (Munoz-Avila et al., 17 May 2025), reflecting both symbolic and LLM-based fallback decomposition, with a rigorous soundness guarantee enforced by the insertion of effect-checking verifier tasks.
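The control flow can be sketched as follows. This is a schematic rendering under simplifying assumptions (ground tasks, no backtracking), not the authors' pseudocode; the names `plan_with_fallback`, `query_llm`, and `verify` are placeholders, not APIs from ChatHTN or SHOP2.

```python
def plan_with_fallback(state, tasks, methods, operators, query_llm, verify):
    """Left-to-right decomposition; on a compound task with no matching
    method, ask the LLM for a grounded primitive decomposition and accept
    it only if the verifier confirms its effects."""
    state, tasks, plan = set(state), list(tasks), []
    while tasks:
        t = tasks.pop(0)
        if t in operators:                    # primitive task
            pre, add, dele = operators[t]
            if not pre <= state:
                return None                   # dead end (no backtracking here)
            state = (state - dele) | add
            plan.append(t)
        elif t in methods:                    # symbolic decomposition
            pre, subtasks = methods[t]
            if not pre <= state:
                return None
            tasks = list(subtasks) + tasks
        else:                                 # LLM fallback
            decomposition = query_llm(t, state)
            if not verify(t, state, decomposition):
                return None                   # reject unsound LLM output
            tasks = list(decomposition) + tasks
    return plan
```

In the real architecture, the verifier is inserted as an explicit task after the LLM-produced decomposition rather than called inline, so the soundness check happens in the state actually reached during planning.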

4. Online Learning of HTN Methods

A critical advancement in recent HTN frameworks is the capacity to learn new methods online, particularly in the context of LLM-augmented planning:

  • Triggering Condition: During planning, when no method exists for the next compound task, issue an LLM query, supplying the current task, state, and operator library.
  • Processing LLM Output: Receive a grounded primitive decomposition, verify applicability and effect via simulation/regression.
  • Generalization: Perform regression along the operator sequence to determine minimal preconditions, then lift the method by replacing ground constants with variables.
  • Installation: Insert the new generalized method into the current method repository; optionally cache in a memoization structure (Xu et al., 17 Nov 2025).
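The generalization step above can be sketched as regression over the operator sequence followed by lifting. The representation here (predicates as tuples, operators as (pre, add, delete) triples of frozensets) is an illustrative assumption, not the paper's exact formalism; a full implementation would also check that no needed atom is deleted by an earlier operator.

```python
def regress_preconditions(op_sequence):
    """Minimal preconditions of a primitive sequence: walk it backwards,
    keeping each atom an operator needs unless an earlier step adds it."""
    needed = set()
    for pre, add, _dele in reversed(op_sequence):
        needed = (needed - add) | pre
    return needed

def lift(atoms):
    """Replace ground constants with variables (?x0, ?x1, ...) so the
    learned decomposition generalizes into a reusable method schema."""
    theta, lifted = {}, []
    for atom in atoms:
        pred, *args = atom
        new_args = []
        for c in args:
            if c not in theta:
                theta[c] = f"?x{len(theta)}"
            new_args.append(theta[c])
        lifted.append((pred, *new_args))
    return lifted, theta
```

The same substitution $\theta$ used to lift the preconditions would be applied to the subtask list, so the learned method stays internally consistent.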

Empirically, this online learning mechanism:

  • Reduces the number of costly LLM calls (by 50–75% on tested benchmarks).
  • Maintains or improves the percentage of problems solved, owing to the soundness of verified, learned methods.
  • Risk mitigation: Each additional LLM query introduces the potential for incorrect decompositions, but learned methods validated by explicit effect checks preserve soundness (Xu et al., 17 Nov 2025).

5. Empirical Evaluation and Domain Applications

Experimental setups in LLM-integrated HTN frameworks utilize:

  • Domains: Logistics transportation (multi-agent, multi-package with mixed truck/plane delivery) and search-and-rescue (drone with compound rescue/scan tasks).
  • Metrics:
    • Number of LLM (GPT) calls per problem instance (averaged over multiple random seeds).
    • Percentage of test problems solved, compared across baselines (symbolic-only, LLM-only, and integrated learning).

Findings:

  • Learning procedures achieve sharp reductions in average GPT call rates, often by half or more.
  • Problem-solving rates are preserved or increased by learned, sound methods, with improved robustness when large monolithic decompositions are avoided.
  • Removal of key top-level methods reverts performance to suboptimal one-shot LLM solutions, underscoring the importance of a well-structured method library (Xu et al., 17 Nov 2025).

6. Limitations and Open Challenges

Current HTN frameworks, including integrated learning systems, face the following challenges:

  • Expressivity of Learned Methods: Only linear, totally-ordered sequences of primitives can be induced online. Recursive, partially ordered, or highly generalized decompositions (e.g., generic loops or recurrences) are not discoverable via current single-instance decompositions.
  • Dependency on LLM Quality: The correctness of learned methods is bounded by the accuracy of the primitive decompositions synthesized by the LLM; systematic errors at the decomposition level cannot be corrected by the verifier.
  • Scalability of Top-Level Task Learning: Large, monolithic LLM calls for entire problem decompositions are prone to failure; future systems aim to support the learning and reuse of complex, mixed compound/primitive decompositions.
  • Integration with Offline Knowledge Acquisition: Full benefit is likely to come from hybrid approaches, combining learned methods with document-driven or expert-supplied models and incremental, online refinement (Xu et al., 17 Nov 2025, Munoz-Avila et al., 17 May 2025).

7. Comparative Perspective and Future Directions

The integration of HTN planning with LLMs (as in ChatHTN and its extensions) represents a significant advance in alleviating the domain knowledge engineering bottleneck by enabling dynamic acquisition of hierarchical control knowledge. The formalism sustains classical guarantees of soundness: explicit verifier tasks ensure every proposed decomposition satisfies its declared effects. However, completeness and plan optimality are inherited from the underlying symbolic planner—neither is guaranteed in the presence of incomplete or erroneous method libraries.

Open research avenues include:

  • Learning recursive and partially ordered methods from multiple instances or by allowing LLMs to propose structured, mixed-type decompositions.
  • Enhancing verification pipelines to filter structurally plausible but semantically erroneous learned methods.
  • Exploring automatic extraction and refinement of methods from domain texts in combination with online learning.
  • Improved integration of LLMs for proposing not just task decompositions but also operator and effect models, with formal verification steps maintaining system soundness (Xu et al., 17 Nov 2025).

These directions extend the HTN planning paradigm by augmenting static hierarchical structure with data-driven, adaptive, and verification-anchored method synthesis.
