
Planning and Learning with Stochastic Action Sets (1805.02363v2)

Published 7 May 2018 in cs.AI

Abstract: In many practical uses of reinforcement learning (RL) the set of actions available at a given state is a random variable, with realizations governed by an exogenous stochastic process. Somewhat surprisingly, the foundations for such sequential decision processes have been unaddressed. In this work, we formalize and investigate MDPs with stochastic action sets (SAS-MDPs) to provide these foundations. We show that optimal policies and value functions in this model have a structure that admits a compact representation. From an RL perspective, we show that Q-learning with sampled action sets is sound. In model-based settings, we consider two important special cases: when individual actions are available with independent probabilities; and a sampling-based model for unknown distributions. We develop poly-time value and policy iteration methods for both cases; and in the first, we offer a poly-time linear programming solution.
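The abstract's claim that "Q-learning with sampled action sets is sound" can be illustrated with a minimal sketch: the only change from standard Q-learning is that both the behavior policy and the max in the bootstrap target range over the realized (sampled) action subset rather than the full action space. Everything below is an illustrative assumption, not the paper's implementation: a toy chain environment, actions independently available with probability `p` (the paper's independent-probabilities special case), and hypothetical names like `q_learning_sas` and `chain_step`.

```python
import random
from collections import defaultdict

def sample_action_set(actions, p):
    # Each action is independently available with probability p (an
    # assumed availability model); re-draw if the subset comes up empty
    # so the agent always has at least one action.
    while True:
        subset = [a for a in actions if random.random() < p]
        if subset:
            return subset

def q_learning_sas(step_fn, start, actions, episodes=300, horizon=50,
                   alpha=0.1, gamma=0.9, eps=0.1, p=0.7):
    Q = defaultdict(float)
    for _ in range(episodes):
        s = start
        avail = sample_action_set(actions, p)
        for _ in range(horizon):
            # Epsilon-greedy restricted to the realized action set.
            a = (random.choice(avail) if random.random() < eps
                 else max(avail, key=lambda x: Q[(s, x)]))
            s2, r, done = step_fn(s, a)
            # The max in the target also ranges only over the action
            # set sampled at the next state.
            next_avail = sample_action_set(actions, p)
            target = r if done else r + gamma * max(Q[(s2, b)] for b in next_avail)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if done:
                break
            s, avail = s2, next_avail
    return Q

def chain_step(s, a):
    # Toy 5-state chain: move left/right, reward 1 on reaching state 4.
    s2 = max(0, min(4, s + a))
    done = (s2 == 4)
    return s2, (1.0 if done else 0.0), done

random.seed(0)
Q = q_learning_sas(chain_step, start=0, actions=[-1, 1])
```

Even though any individual action may be unavailable at a given step, learning on the sampled sets converges to values consistent with the stochastic-availability process, which is the intuition behind the paper's soundness result.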

Authors (8)
  1. Craig Boutilier (78 papers)
  2. Alon Cohen (24 papers)
  3. Amit Daniely (50 papers)
  4. Avinatan Hassidim (66 papers)
  5. Yishay Mansour (158 papers)
  6. Ofer Meshi (14 papers)
  7. Martin Mladenov (22 papers)
  8. Dale Schuurmans (112 papers)
Citations (21)
