HDDL: Hierarchical Domain Definition Language

Updated 27 January 2026
  • HDDL is a formal, PDDL-based language that encodes HTN planning domains by modeling both primitive actions and compound tasks.
  • It extends classical planning techniques with temporal, numeric, and constraint-rich constructs to address complex real-world applications.
  • Recent advancements in HDDL include automated domain learning and seamless integration with hybrid AI frameworks, enhancing planner interoperability.

The Hierarchical Domain Definition Language (HDDL) is a formal, PDDL-based specification language designed to encode Hierarchical Task Network (HTN) planning domains. It enables the explicit modeling of both primitive actions and hierarchical decompositions, providing a lingua franca for domain-independent HTN planners. HDDL supports classical, symbolic planning as well as recent extensions to numeric and temporal reasoning, making it a central tool for both theoretical research and applied AI planning in robotics, industrial automation, and integrated reinforcement learning environments (Höller et al., 2019, Pellier et al., 2022, Pellier et al., 2023, La et al., 28 May 2025, Mu et al., 2023).

1. Formal Syntax and Core Constructs

HDDL systematically extends the standard PDDL grammar with constructs for explicit compound tasks and decomposition methods, mirroring the theoretical structure of HTN planning. A classical HDDL domain is formally defined as a tuple D = (L, T_P, T_C, M), where L is a typed first-order logic language, T_P the set of primitive (groundable) actions, T_C the set of compound (abstract) tasks, and M a set of decomposition methods (Höller et al., 2019, La et al., 28 May 2025).

Grammar Overview

The key elements of HDDL syntax (EBNF; abridged):

  • Primitive actions:

(:action <name>
  :parameters (<typed-list>)
  :precondition <gd>
  :effect <effect>)

  • Compound tasks:

(:task <name> :parameters (<typed-list>))

  • Methods:

(:method <name>
  :parameters (<typed-list>)
  :task (<compound-task>)
  [:precondition <gd>]
  [:subtasks (and (<id1> <task_1>) ... (<idn> <task_n>))]
  [:ordering (and (< <id1> <id2>) ...)]
  [:constraints (and ...)])

  • Problem specification:

Includes :htn block to set the initial task network and ordering.

Compound task and method declarations are mandatory for valid HTN encoding. Partial and total subtask-orderings are supported via the :ordering section. Variable-constraint blocks in methods serve to isolate parameter equality/inequality constraints from state conditions (Höller et al., 2019).
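
The pieces above compose as in the following minimal sketch of a hypothetical one-truck delivery domain and problem; all names are illustrative, not drawn from any benchmark:

(:task deliver :parameters (?p - package ?l - location))

(:method m-deliver
  :parameters (?p - package ?t - truck ?l - location)
  :task (deliver ?p ?l)
  :subtasks (and (t1 (load ?p ?t))
                 (t2 (drive ?t ?l))
                 (t3 (unload ?p ?t)))
  :ordering (and (< t1 t2) (< t2 t3)))

(:action drive
  :parameters (?t - truck ?l - location)
  :precondition (at ?t depot)
  :effect (and (at ?t ?l) (not (at ?t depot))))

A matching problem file poses the initial task network via the :htn block:

(define (problem deliver-1)
  (:domain delivery)
  (:objects p1 - package truck1 - truck depot city - location)
  (:htn :subtasks (and (deliver p1 city)))
  (:init (at p1 depot) (at truck1 depot)))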

2. Semantics of HTN Planning in HDDL

The semantics of HDDL are grounded in the standard model of HTN planning, where planning operates over task networks: tuples (I, ≺, α, VC) with a set of task identifiers I, a strict partial order ≺ on I, a labeling α mapping each identifier to a task symbol, and variable constraints VC. Planning proceeds by recursively refining compound tasks using methods until a primitive, executable sequence is produced.

  • Action execution: Primitive tasks (name(a), pre(a), eff(a)) are only applicable if pre(a) holds in the current state; their effects are applied atomically.
  • Method application: A method m = (name_m, pre_m, subtasks_m, ordering_m) replaces a matching compound task node by its network of subtasks, preserving declared ordering and constraints.
  • Solution criterion: A plan is a fully-primitive sequence reachable via valid decompositions and applicable in the initial state, optionally satisfying a goal formula (Höller et al., 2019, La et al., 28 May 2025).

Well-formedness mandates that ordering constraints induce a DAG, parameter lists are strictly typed, and no name conflicts exist across actions/methods.
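
The decomposition step and the acyclicity requirement can be sketched in a few lines; this is an illustrative model of the semantics, not a planner, and all names are hypothetical:

```python
# Sketch: apply one decomposition method to a task network (replace a
# compound task node by its subtasks, inheriting the orderings the
# replaced node took part in), then check the ordering forms a DAG.

def apply_method(network, task_id, subtasks, sub_order):
    """Replace compound task `task_id` by its subtask network."""
    ids, order = network
    new_ids = [i for i in ids if i != task_id] + list(subtasks)
    new_order = set(sub_order)
    for a, b in order:
        if a == task_id:            # every subtask now precedes b
            new_order |= {(s, b) for s in subtasks}
        elif b == task_id:          # a now precedes every subtask
            new_order |= {(a, s) for s in subtasks}
        else:
            new_order.add((a, b))
    return new_ids, new_order

def is_dag(ids, order):
    """Kahn's algorithm: orderings are well-formed iff acyclic."""
    indeg = {i: 0 for i in ids}
    for _, b in order:
        indeg[b] += 1
    frontier = [i for i in ids if indeg[i] == 0]
    seen = 0
    while frontier:
        n = frontier.pop()
        seen += 1
        for a, b in order:
            if a == n:
                indeg[b] -= 1
                if indeg[b] == 0:
                    frontier.append(b)
    return seen == len(ids)

network = (["t0"], set())                      # initial task network
network = apply_method(network, "t0",
                       ["drive", "drop"], {("drive", "drop")})
print(is_dag(*network))                        # acyclic ordering -> True
```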

3. Extensions: Temporal, Numeric, and Constraint-Rich HDDL

HDDL 2.1 extends the base language to capture temporal and numeric domains by importing semantic primitives from PDDL 2.1/3.0 and ANML. New constructs include durative actions and methods, numeric fluents, and state/trajectory constraints at the method and plan level (Pellier et al., 2022, Pellier et al., 2023).

Feature                      | HDDL 1.0 | HDDL 2.1    | PDDL 2.1      | ANML
Hierarchy (compound tasks)   | ✓        | ✓           | ✗             | ✓
Durative actions/methods     | ✗        | ✓           | ✓ (actions)   | ✓
Numeric fluents              | ✗        | ✓           | ✓             | ✓
Timed literals               | ✗        | ✓           | ✓             | ✓
State/trajectory constraints | ✗        | ✓ (methods) | PDDL 3.0 only | ✓ (richer)

Durative actions in HDDL 2.1 have the structure:

(:durative-action <name>
  :parameters (<typed-list>)
  :duration (= ?duration <expr>)   ; also <= / >=
  :condition (and (at start <gd>) (over all <gd>) (at end <gd>))
  :effect (and (at start <effect>) (at end <effect>)))

Methods can declare duration and state constraints, including qualitative temporal constraints (<, >, =), state-trajectory predicates (e.g., hold-before, hold-during), and duration relationships among subtasks. The semantics are inherited from PDDL 2.1: at a plan time-point t, all at-start conditions must hold; over-all invariants must hold throughout the action's interval; at-end conditions are checked at t+δ (Pellier et al., 2023).
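
These timing rules can be sketched as a check over a plan timeline. The code below assumes a hypothetical `trace(t)` helper mapping a time point to the set of facts true at t; it is an illustration of the semantics, not a plan validator:

```python
# Sketch of PDDL 2.1-style durative-condition checking: at-start
# conditions hold at t, over-all invariants throughout the open
# interval (t, t+d), at-end conditions at t+d.

def durative_ok(trace, t, d, at_start, over_all, at_end, step=0.25):
    if not at_start <= trace(t):
        return False
    k = t + step
    while k < t + d:                 # sample the open interval
        if not over_all <= trace(k):
            return False
        k += step
    return at_end <= trace(t + d)

# Toy trace: the truck stays fueled during a drive of duration 2.
def trace(time):
    facts = {"fueled"}
    if time >= 2.0:
        facts.add("at-depot")
    return facts

print(durative_ok(trace, 0.0, 2.0,
                  at_start={"fueled"},
                  over_all={"fueled"},
                  at_end={"at-depot"}))   # True
```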

4. Automated HDDL Domain Learning

Recent research targets the acquisition of HDDL domains from observation, easing the bottleneck of manual HTN encoding. The HierAMLSI algorithm provides a grammar-induction framework for learning both STRIPS-level action models and HTN methods from positive and negative plan traces (Grand et al., 2022).

  • Learning objective: Find a minimal DFA such that all positively observed primitive traces are accepted and negative traces are rejected.
  • Algorithm: RPNI-style state merging on the prefix tree of traces; build a Task DFA by annotating compound-task decompositions as transitions. Subsequently, infer preconditions/effects of actions via intersection over DFA transition states; heuristically minimize the number of learned methods using a greedy set cover.
  • Noise robustness: Tabu search is used to tune preconditions/effects for consistency under noisy or partial observations.
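
The starting point of this pipeline can be sketched as a prefix-tree acceptor (PTA) built from positive traces, which RPNI-style learning then generalizes by state merging. This is an illustrative sketch, not HierAMLSI's implementation; the merging and method-inference steps are omitted:

```python
# Sketch: build a prefix-tree acceptor from positive action traces.
# RPNI-style learning would then merge states whenever the merge
# stays consistent with the negative traces.

def build_pta(positive_traces):
    """States are trace prefixes; transitions follow each action."""
    delta, accepting = {}, set()
    for trace in positive_traces:
        state = ()
        for action in trace:
            nxt = state + (action,)
            delta[(state, action)] = nxt
            state = nxt
        accepting.add(state)
    return delta, accepting

def accepts(delta, accepting, trace):
    state = ()
    for action in trace:
        if (state, action) not in delta:
            return False
        state = delta[(state, action)]
    return state in accepting

pos = [("pickup", "move", "drop"), ("pickup", "drop")]
delta, acc = build_pta(pos)
print(accepts(delta, acc, ("pickup", "move", "drop")))  # True
print(accepts(delta, acc, ("move", "pickup")))          # False
```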

Empirical results demonstrate that, with 600 training tasks, 100% accuracy is reached for method learning across five IPC HTN domains, even under substantial noise or partial observability. Learning both actions and methods remains robust for moderate training-set sizes (more than 300 tasks yields over 90% accuracy) (Grand et al., 2022).

5. Applications in Symbolic and Hybrid AI

HDDL is central to modern symbolic planning benchmarks and is used natively in International Planning Competition domains such as Transport, Gripper, and Blocksworld. The language's formal structure underpins automated domain learning, transfer learning, and tool interoperability (Höller et al., 2019, Grand et al., 2022).

In integrated symbolic–subsymbolic AI, HDDL domains enable direct mapping to hierarchical option sets in reinforcement learning, as seen in the HTN-guided SOMARL framework (Mu et al., 2023):

  • Each method in M is mapped one-to-one to a symbolic option for higher-level control.
  • The meta-controller assigns options, driving agents through the hierarchy while allowing RL at the low level.
  • Intrinsic reward shaping and adaptive planning are enabled by propagating cumulative option success/reward statistics into the HTN planning heuristic, which in turn modifies future decompositions.
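
The method-to-option mapping and statistics propagation can be sketched as follows; names and the smoothing choice are illustrative assumptions, not the SOMARL implementation:

```python
# Sketch: each HTN method becomes a symbolic option; cumulative
# success statistics bias which decomposition the planner prefers.

class OptionStats:
    def __init__(self, methods):
        # one option per method (the one-to-one mapping above)
        self.stats = {m: {"tries": 0, "successes": 0} for m in methods}

    def record(self, method, success):
        s = self.stats[method]
        s["tries"] += 1
        s["successes"] += int(success)

    def heuristic(self, method):
        """Smoothed success-rate estimate used to rank decompositions."""
        s = self.stats[method]
        return (s["successes"] + 1) / (s["tries"] + 2)  # Laplace smoothing

    def best_method(self, candidates):
        return max(candidates, key=self.heuristic)

opts = OptionStats(["m-deliver-direct", "m-deliver-via-hub"])
opts.record("m-deliver-direct", True)
opts.record("m-deliver-via-hub", False)
print(opts.best_method(["m-deliver-direct", "m-deliver-via-hub"]))
# m-deliver-direct
```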

In multi-agent RL, HDDL provides a common substrate for collaborative scenario definition and directly usable environment instantiations via platforms such as HDDLGym, where HDDL domains are parsed into OpenAI Gym-compliant state-action spaces (La et al., 28 May 2025). This supports RL research on coordinated planning, MARL, and learning in hierarchical domains (e.g., Overcooked, Transport).
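
The core idea of such an instantiation can be sketched as exposing grounded primitive actions as a discrete action space behind a gym-style reset/step interface. This is a hypothetical minimal wrapper, not HDDLGym's actual API:

```python
# Sketch: grounded HDDL primitive actions (precondition/effect pairs
# over sets of facts) exposed as a discrete gym-style environment.

class TaskEnv:
    def __init__(self, actions, goal):
        self.actions = actions          # name -> (preconds, (add, delete))
        self.names = sorted(actions)    # index-able discrete action space
        self.goal = goal
        self.state = frozenset()

    def reset(self, init):
        self.state = frozenset(init)
        return self.state

    def step(self, index):
        pre, (add, delete) = self.actions[self.names[index]]
        if not pre <= self.state:       # inapplicable action: penalty
            return self.state, -1.0, False
        self.state = (self.state - delete) | add
        done = self.goal <= self.state
        return self.state, (1.0 if done else 0.0), done

env = TaskEnv(
    {"load":  (frozenset({"at-depot"}),
               (frozenset({"loaded"}), frozenset())),
     "drive": (frozenset({"loaded"}),
               (frozenset({"delivered"}), frozenset({"at-depot"})))},
    goal=frozenset({"delivered"}))
env.reset({"at-depot"})
env.step(env.names.index("load"))
_, reward, done = env.step(env.names.index("drive"))
print(done, reward)   # True 1.0
```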

6. Design Decisions, Trade-offs, and Limitations

The design of HDDL prioritizes PDDL syntax compatibility, extensibility, and a minimal core feature set for cross-planner interoperability. Its main advantages include:

  • Ease of model sharing and tool-chain integration.
  • Support for both totally and partially ordered task networks.
  • Verifiable well-formedness and type safety.
  • Extensible grammar for future temporal, numeric, or preference constructs (Höller et al., 2019, Pellier et al., 2023).

Limitations reflect ongoing challenges:

  • Classic HDDL lacks native support for continuous effects and preferences (planned for future levels).
  • The extension to temporal/numeric domains increases computational complexity (PSPACE/EXPSPACE-hardness in general).
  • Plan validation for full HTN+temporal/numeric semantics is nontrivial.
  • Parsing and observation space scalability in RL environments require careful engineering to avoid combinatorial blow-up (see HDDLGym design) (La et al., 28 May 2025).

7. Impact and Future Directions

HDDL has established itself as the de facto standard for expressing HTN domains in symbolic planning, and its ongoing evolution (e.g., HDDL 2.1) continues to align the language with advances in both symbolic and learning-based AI. Future research directions include:

  • Standardization of temporal/metric HTN benchmarks and validation protocols.
  • Deeper integration with decentralized or multi-agent planning frameworks.
  • Synergistic methods for learning HDDL models from demonstration or RL backend traces.
  • Extension to richer preference/axiom languages and deployment in safety-critical, real-time domains (Pellier et al., 2022, Pellier et al., 2023, La et al., 28 May 2025).

Through its formal rigor, extensibility, and active adoption by the research community, HDDL provides a critical foundation for both theoretical and applied research in hierarchical planning, automated learning, and hybrid symbolic-numeric agent design.
