PEORL: Integrating Symbolic Planning and Hierarchical Reinforcement Learning for Robust Decision-Making (1804.07779v3)

Published 20 Apr 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Reinforcement learning and symbolic planning have both been used to build intelligent autonomous agents. Reinforcement learning relies on learning from interactions with the real world, which often requires an infeasibly large amount of experience. Symbolic planning relies on manually crafted symbolic knowledge, which may not be robust to domain uncertainties and changes. In this paper we present a unified framework, PEORL, that integrates symbolic planning with hierarchical reinforcement learning (HRL) to cope with decision-making in a dynamic environment with uncertainties. Symbolic plans are used to guide the agent's task execution and learning, and the learned experience is fed back to symbolic knowledge to improve planning. This method leads to rapid policy search and robust symbolic plans in complex domains. The framework is tested on benchmark HRL domains.
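The abstract describes a loop in which symbolic plans guide execution and learning, while learned experience feeds back into planning. A minimal conceptual sketch of such a plan-learn loop is below; all names (`plan`, `execute`, the subtasks, and the toy reward model) are hypothetical illustrations, not the authors' implementation, which uses answer set programming and R-learning.

```python
# Hypothetical sketch of a PEORL-style plan-learn loop (not the paper's code).
# A symbolic "planner" proposes subtask orderings, an RL-style learner estimates
# each subtask's quality from noisy experience, and the learned values bias the
# next planning round -- the feedback loop the abstract describes.

import random

def plan(tasks, learned_value):
    # Toy planner: order candidate subtasks by learned value (best first).
    # PEORL itself plans over symbolic knowledge with answer set programming.
    return sorted(tasks, key=lambda t: -learned_value.get(t, 0.0))

def execute(task, rng):
    # Toy environment: each subtask yields noisy reward around a hidden mean.
    hidden_mean = {"shortcut": 1.0, "detour": 0.3, "risky": -0.5}
    return hidden_mean[task] + rng.gauss(0.0, 0.1)

def peorl_loop(episodes=200, alpha=0.1, seed=0):
    rng = random.Random(seed)
    tasks = ["shortcut", "detour", "risky"]
    value = {}  # learned quality of each symbolic action, fed back to planning
    for _ in range(episodes):
        for task in plan(tasks, value):      # symbolic plan guides execution
            reward = execute(task, rng)      # interaction with the environment
            old = value.get(task, 0.0)
            value[task] = old + alpha * (reward - old)  # running value update
    return value

values = peorl_loop()
best = max(values, key=values.get)
print(best)
```

After enough episodes the learned values separate the subtasks, so the planner comes to prefer the high-reward option; this mirrors, at toy scale, how learned experience makes the symbolic plans more robust.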

Authors (4)
  1. Fangkai Yang (45 papers)
  2. Daoming Lyu (12 papers)
  3. Bo Liu (484 papers)
  4. Steven Gustafson (7 papers)
Citations (131)
