
Forecaster: Towards Temporally Abstract Tree-Search Planning from Pixels (2310.09997v1)

Published 16 Oct 2023 in cs.AI, cs.LG, cs.SY, and eess.SY

Abstract: The ability to plan at many different levels of abstraction enables agents to envision the long-term repercussions of their decisions and thus enables sample-efficient learning. This becomes particularly beneficial in complex environments with high-dimensional state spaces such as pixels, where the goal is distant and the reward sparse. We introduce Forecaster, a deep hierarchical reinforcement learning approach which plans over high-level goals by leveraging a temporally abstract world model. Forecaster learns an abstract model of its environment by modelling the transition dynamics at an abstract level and training a world model on such transitions. It then uses this world model to choose optimal high-level goals through a tree-search planning procedure. It additionally trains a low-level policy that learns to reach those goals. Our method captures not only building world models with longer horizons, but also planning with such models in downstream tasks. We empirically demonstrate Forecaster's potential in both single-task learning and generalization to new tasks in the AntMaze domain.
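The core loop described in the abstract can be sketched as a tree search over high-level goals, scored by a learned temporally abstract world model. The sketch below is purely illustrative and is not the paper's actual implementation: the `model(state, goal) -> (next_state, reward)` interface, the goal set, and the toy 1-D environment are all assumptions standing in for the learned abstract transition model and goal space.

```python
# Hypothetical sketch of temporally abstract tree-search planning.
# `model` plays the role of the learned abstract world model: given the
# current abstract state and a candidate high-level goal, it predicts the
# resulting abstract state and the (abstract) reward for pursuing that goal.

def plan(model, state, goals, depth):
    """Exhaustively search sequences of high-level goals up to `depth`,
    returning (best_value, best_first_goal)."""
    if depth == 0:
        return 0.0, None
    best_value, best_goal = float("-inf"), None
    for g in goals:
        next_state, reward = model(state, g)
        future_value, _ = plan(model, next_state, goals, depth - 1)
        value = reward + future_value
        if value > best_value:
            best_value, best_goal = value, g
    return best_value, best_goal

# Toy stand-in world model on a 1-D line: each goal shifts the agent,
# and the reward is the negative distance to a target at position 10.
def toy_model(state, goal):
    next_state = state + goal
    return next_state, -abs(10 - next_state)

value, first_goal = plan(toy_model, 0, goals=[-2, 1, 3], depth=3)
```

In the full method, the chosen high-level goal would then be handed to the low-level policy, which executes primitive actions until the goal is reached (or a timeout), after which planning repeats from the new abstract state.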

Authors (4)
  1. Thomas Jiralerspong (12 papers)
  2. Flemming Kondrup (3 papers)
  3. Doina Precup (206 papers)
  4. Khimya Khetarpal (25 papers)
