
Learning First-Order Symbolic Representations for Planning from the Structure of the State Space (1909.05546v3)

Published 12 Sep 2019 in cs.AI

Abstract: One of the main obstacles for developing flexible AI systems is the split between data-based learners and model-based solvers. Solvers such as classical planners are very flexible and can deal with a variety of problem instances and goals but require first-order symbolic models. Data-based learners, on the other hand, are robust but do not produce such representations. In this work we address this split by showing how the first-order symbolic representations that are used by planners can be learned from non-symbolic inputs that encode the structure of the state space. The representation learning problem is formulated as the problem of inferring planning instances over a common but unknown first-order domain that account for the structure of the observed state space. This means to infer a complete first-order representation (i.e. general action schemas, relational symbols, and objects) that explains the observed state space structures. The inference problem is cast as a two-level combinatorial search where the outer level searches for values of a small set of hyperparameters and the inner level, solved via SAT, searches for a first-order symbolic model. The framework is shown to produce general and correct first-order representations for standard problems like Gripper, Blocksworld, and Hanoi from input graphs that encode the flat state-space structure of a single instance.

Citations (50)

Summary

  • The paper presents a novel two-level combinatorial search method, with a SAT solver at the inner level, for inferring first-order symbolic planning models from non-symbolic state-space graphs.
  • The method uniquely uses plain state space graphs as input, differing from traditional approaches that rely on traces or direct state contents, allowing inference of general action schemas and objects.
  • Experimental results show the approach successfully reconstructs meaningful symbolic representations across standard domains like Gripper and Blocksworld, demonstrating its efficacy in bridging symbolic and sub-symbolic AI paradigms.

Learning First-Order Symbolic Representations for Planning from the Structure of the State Space

The paper by Blai Bonet and Hector Geffner presents a substantial contribution to artificial intelligence, exploring the interface between data-driven learning and model-based problem solving. The authors tackle a longstanding challenge in AI: integrating flexible model-based solvers, such as classical planners that require symbolic representations, with data-based learners that typically yield opaque models lacking symbolic structure.

Core Approach

Bonet and Geffner introduce a novel approach for learning first-order symbolic representations for planning, deriving them from non-symbolic inputs that describe the structure of the state space. The paper formulates this as a two-level combinatorial search for planning instances over an unknown first-order domain: the outer level searches over a small set of hyperparameters, while the inner level employs a SAT solver to construct the symbolic model.
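
The outer/inner control structure can be sketched as follows. This is a structural illustration only: the inner SAT problem here is toy graph coloring standing in for the paper's much richer encoding of a first-order planning model, the brute-force satisfiability check stands in for a real SAT solver, and the function names (`two_level_search`, `brute_force_sat`, `encode_coloring`) are hypothetical, not from the paper.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Tiny SAT check by exhaustive enumeration (fine at toy sizes).
    Literals follow DIMACS conventions: +v means variable v is true."""
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

def encode_coloring(edges, num_nodes, k):
    """Toy inner encoding: is the graph k-colorable?  (Stands in for
    the paper's encoding of a first-order model over the input graph.)"""
    var = lambda i, c: i * k + c + 1          # variable for (node i, color c)
    clauses = [[var(i, c) for c in range(k)]  # each node gets >= 1 color
               for i in range(num_nodes)]
    for u, v in edges:
        for c in range(k):
            clauses.append([-var(u, c), -var(v, c)])  # endpoints differ
    return clauses

def two_level_search(edges, num_nodes, max_k=4):
    """Outer level: try hyperparameter values smallest-first.  Inner
    level: a SAT check.  In the paper, the hyperparameters instead
    bound quantities such as the number of predicates, their arities,
    and the objects, and the SAT theory asserts that the induced
    first-order model reproduces the observed state-space graph."""
    for k in range(1, max_k + 1):
        model = brute_force_sat(num_nodes * k,
                                encode_coloring(edges, num_nodes, k))
        if model is not None:
            return k, model
    return None

# A triangle needs 3 colors, so the outer search settles on k = 3:
k, _ = two_level_search([(0, 1), (1, 2), (0, 2)], 3)
```

Searching hyperparameters smallest-first mirrors the preference for the most parsimonious model that explains the input.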

Methodological Insights

The methodology departs from traditional planning and reinforcement learning frameworks by adopting plain state space graphs as input, rather than traces or image data. This approach focuses on inferring general action schemas, relational symbols, and objects from state space structures recorded in labeled directed graphs. Notably, these graphs do not encode the contents of states directly, a feature that significantly differentiates this work from existing visual and perceptual representation learning methods.
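
To make the input format concrete, a minimal sketch of the kind of labeled directed graph the method consumes might look like the following. The state IDs and action labels are hypothetical, not taken from the paper; the key point is that nodes are opaque, so nothing in the input reveals what a state "contains".

```python
# Opaque state IDs mapped to labeled outgoing edges: only the
# connectivity and edge labels are observable, never state contents.
state_graph = {
    0: [("move", 1)],
    1: [("move", 0), ("drop", 2)],
    2: [("move", 3)],
    3: [("move", 2), ("pick", 1)],
}

def nodes(graph):
    """All states appearing as a source or target of some edge."""
    return set(graph) | {dst for outs in graph.values() for _, dst in outs}

def edge_labels(graph):
    """The distinct action labels occurring on edges."""
    return {lbl for outs in graph.values() for lbl, _ in outs}
```

The learner's task is then to invent predicates, objects, and action schemas whose induced state space is isomorphic to such a graph.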

Experimental Validation

Bonet and Geffner evaluate their approach on standard domains such as Gripper, Blocksworld, and Hanoi, successfully reconstructing correct and general symbolic representations from the flat state-space graph of a single small instance. These results validate the method's efficacy in recovering meaningful domain models from minimal input data with their SAT-based representation inference technique.

Implications and Future Directions

The implications of this research are substantial, offering a pathway to bridge symbolic and sub-symbolic AI paradigms. Practically, the approach promises enhancements in the transparency and reusability of AI models, potentially impacting fields where robust planning capabilities are essential. Theoretically, it stimulates further inquiry into symbolic representation extraction from complex, structured data spaces.

Looking forward, handling incomplete or non-deterministic input graphs and tolerating noise could broaden the range of applications and strengthen the framework's methodological basis. Grounding the learned representations in perceptual input also remains a vital challenge, underscoring the interplay between symbolic understanding and sensory experience in AI.

In sum, Bonet and Geffner's work is a significant step toward aligning symbolic planning with data-driven learning. Their framework opens avenues for learned planning models, for more transparent real-world AI applications, and for foundational advances in the design of intelligent systems.
