Noisy Symbolic Abstractions for Deep RL: A case study with Reward Machines (2211.10902v2)

Published 20 Nov 2022 in cs.LG, cs.AI, and cs.FL

Abstract: Natural and formal languages provide an effective mechanism for humans to specify instructions and reward functions. We investigate how to generate policies via RL when reward functions are specified in a symbolic language captured by Reward Machines, an increasingly popular automaton-inspired structure. We are interested in the case where the mapping of environment state to a symbolic (here, Reward Machine) vocabulary -- commonly known as the labelling function -- is uncertain from the perspective of the agent. We formulate the problem of policy learning in Reward Machines with noisy symbolic abstractions as a special class of POMDP optimization problem, and investigate several methods to address the problem, building on existing and new techniques, the latter focused on predicting Reward Machine state, rather than on grounding of individual symbols. We analyze these methods and evaluate them experimentally under varying degrees of uncertainty in the correct interpretation of the symbolic vocabulary. We verify the strength of our approach and the limitations of existing methods via an empirical investigation on both illustrative, toy domains and partially observable, deep RL domains.
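To make the setting concrete, below is a minimal Python sketch (not the authors' code) of the pieces the abstract names: a Reward Machine, a noisy labelling function, and a belief update over Reward Machine states. The `RewardMachine` class, the symbol-flip noise model, and `update_rm_belief` are illustrative assumptions following the standard Reward Machine formulation (an automaton whose transitions fire on truth assignments to a propositional vocabulary and emit rewards); they are not the paper's implementation.

```python
import random
from typing import Dict, FrozenSet, Tuple

class RewardMachine:
    """Minimal Mealy-style Reward Machine: states are ints, transitions are
    triggered by the set of propositions observed to hold, and each
    transition emits a reward."""

    def __init__(
        self,
        initial_state: int,
        transitions: Dict[Tuple[int, FrozenSet[str]], Tuple[int, float]],
    ):
        self.initial_state = initial_state
        self.transitions = transitions

    def step(self, u: int, props: FrozenSet[str]) -> Tuple[int, float]:
        # Unlisted (state, label) pairs self-loop with zero reward.
        return self.transitions.get((u, props), (u, 0.0))

# Example task: observe "a", then observe "b"; reward 1 on completion.
rm = RewardMachine(
    initial_state=0,
    transitions={
        (0, frozenset({"a"})): (1, 0.0),
        (1, frozenset({"b"})): (2, 1.0),
    },
)

VOCAB = ("a", "b")

def noisy_labeller(true_props: FrozenSet[str], flip_prob: float = 0.1) -> FrozenSet[str]:
    """Hypothetical noise model: each symbol's truth value is flipped
    independently with probability flip_prob, so the agent's view of the
    RM vocabulary is uncertain."""
    observed = set()
    for p in VOCAB:
        holds = p in true_props
        if random.random() < flip_prob:
            holds = not holds
        if holds:
            observed.add(p)
    return frozenset(observed)

def update_rm_belief(
    belief: Dict[int, float],
    label_probs: Dict[FrozenSet[str], float],
) -> Dict[int, float]:
    """One Bayesian-filter step over RM states: push the current belief
    through every candidate true label, weighted by how likely that label
    is given the noisy observation. This tracks the RM state directly
    rather than committing to a hard grounding of each symbol -- a generic
    sketch of that idea, not the paper's exact algorithm."""
    new_belief: Dict[int, float] = {}
    for u, p_u in belief.items():
        for props, p_l in label_probs.items():
            u_next, _ = rm.step(u, props)
            new_belief[u_next] = new_belief.get(u_next, 0.0) + p_u * p_l
    return new_belief

# Tiny demo: run the noisy labeller over a fixed ground-truth trace.
u = rm.initial_state
for true_props in [frozenset(), frozenset({"a"}), frozenset({"b"})]:
    label = noisy_labeller(true_props)
    u, r = rm.step(u, label)
    print(f"observed={set(label)} -> rm_state={u}, reward={r}")
```

Because the labeller can misreport symbols, naively feeding its output to `rm.step` can drive the automaton to the wrong state; maintaining a distribution over RM states, as in `update_rm_belief`, is one natural way to treat the problem as the POMDP the abstract describes.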

Authors (6)
  1. Andrew C. Li (6 papers)
  2. Zizhao Chen (6 papers)
  3. Pashootan Vaezipoor (13 papers)
  4. Toryn Q. Klassen (11 papers)
  5. Rodrigo Toro Icarte (14 papers)
  6. Sheila A. McIlraith (22 papers)
Citations (9)
