
PDDLGym: Gym Environments from PDDL Problems (2002.06432v2)

Published 15 Feb 2020 in cs.AI

Abstract: We present PDDLGym, a framework that automatically constructs OpenAI Gym environments from PDDL domains and problems. Observations and actions in PDDLGym are relational, making the framework particularly well-suited for research in relational reinforcement learning and relational sequential decision-making. PDDLGym is also useful as a generic framework for rapidly building numerous, diverse benchmarks from a concise and familiar specification language. We discuss design decisions and implementation details, and also illustrate empirical variations between the 20 built-in environments in terms of planning and model-learning difficulty. We hope that PDDLGym will facilitate bridge-building between the reinforcement learning community (from which Gym emerged) and the AI planning community (which produced PDDL). We look forward to gathering feedback from all those interested and expanding the set of available environments and features accordingly. Code: https://github.com/tomsilver/pddlgym

Authors (2)
  1. Tom Silver (31 papers)
  2. Rohan Chitnis (22 papers)
Citations (50)

Summary

PDDLGym: An Overview of Gym Environments from PDDL Problems

The paper "PDDLGym: Gym Environments from PDDL Problems" introduces PDDLGym, a framework that innovatively integrates OpenAI Gym environments with Planning Domain Definition Language (PDDL) tasks. This framework represents a strategic fusion of reinforcement learning constructs and symbolic AI planning tasks, utilizing relational observations and actions as its core feature. The work focuses on facilitating research in relational reinforcement learning and sequential decision-making by providing researchers with a versatile tool for generating diverse benchmarks.

PDDLGym generates environments automatically from PDDL domain and problem files. Architecturally, it uses PDDL's symbolic representation to define tasks and exposes them through the widely adopted Gym interface. This pairing of PDDL's relational syntax with Gym's interaction model makes PDDLGym well suited to research that crosses the boundary between traditional reinforcement learning and symbolic planning.
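As a concrete illustration, the sketch below assumes the pip-installable pddlgym package and one of its built-in environment names (PDDLEnvSokoban-v0 here); exact return signatures may vary across versions. An environment is constructed like any other Gym environment:

```python
import pddlgym

# Built-in environments are registered under Gym-style names; each pairs
# one PDDL domain file with a collection of PDDL problem files.
env = pddlgym.make("PDDLEnvSokoban-v0")

# reset() picks a problem instance and returns the initial relational state.
obs, debug_info = env.reset()

print(obs.literals)  # frozenset of ground predicates over objects
print(obs.objects)   # the objects in this problem instance
print(obs.goal)      # the goal, also expressed with literals
```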

The paper outlines PDDLGym's design principles and implementation mechanics. In essence, PDDLGym builds a bridge that lets tasks described in PDDL, a language developed for expressing AI planning problems, be used directly as Gym environments. This gives planning researchers a way to evaluate their methods on RL-style tasks and vice versa, encouraging cross-pollination of methodologies and insights across the two communities.

PDDLGym supports the standard episodic interaction loop: the agent receives an observation, performs an action, and repeats until the episode concludes. The framework leverages the relational character of PDDL: observations and actions are expressed as sets of ground predicates over objects (e.g., on(block-a, block-b) in Blocks World). This relational structure is essential for tasks that depend on the relationships between entities, a requirement common in many AI applications.
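A minimal sketch of this loop, continuing from the environment created above (the four-tuple step() signature follows the older Gym API used by PDDLGym):

```python
obs, debug_info = env.reset()
for _ in range(100):  # cap episode length so the sketch always terminates
    # Sample a valid action (see the discussion of action sampling below).
    action = env.action_space.sample(obs)
    # step() returns the successor relational state; the reward is sparse:
    # 1.0 once the goal literals hold, 0.0 otherwise.
    obs, reward, done, debug_info = env.step(action)
    if done:
        break
```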

The action space in PDDLGym is defined around the distinction between free and non-free parameters of operators: only the free parameters, those the agent actually chooses, enter the action space, which more accurately reflects the decisions available to an agent in planning domains. In line with reinforcement learning conventions, PDDLGym also provides action sampling that returns only valid actions, i.e., groundings whose operator preconditions hold in the current state.
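Concretely, and continuing the sketch above, sampling in PDDLGym takes the current observation as an argument, a deliberate departure from vanilla Gym spaces (per the README's usage; exact behavior can depend on how the environment is configured):

```python
# Validity of a grounding depends on whether the operator's preconditions
# hold in the current state, so sample() takes the observation.
action = env.action_space.sample(obs)

# The action binds only an operator's free parameters to concrete objects;
# non-free parameters are determined by the environment from the state.
print(action)
```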

PDDLGym's utility is manifold:

  1. It streamlines the creation of benchmark tasks across relational domains, using PDDL as a compact and expressive specification medium.
  2. It gives researchers from the reinforcement learning and planning communities shared benchmarks on which to run their algorithms, enabling direct comparison and the merging of approaches.
  3. It opens avenues for further work on relational decision-making, such as learning symbolic operator descriptions and developing planning strategies that exploit relational structure.

The 20 built-in environments vary substantially in both planning difficulty and model-learning difficulty: the paper's experiments show a broad spectrum of challenge for planners and for transition-model learners alike. This breadth makes the framework applicable to evaluating and advancing both existing and novel algorithms.

In conclusion, PDDLGym is a practical advance in the creation and use of diverse AI benchmarks. Future development might extend support to more complex PDDL constructs and add interfaces to online repositories of PDDL tasks. As a tool at the intersection of two major AI subfields, PDDLGym holds promise for catalyzing new research directions and fostering closer integration of machine learning and symbolic AI.
