Measurized Markov Decision Processes (2405.03888v4)
Abstract: In this paper, we explore lifting Markov Decision Processes (MDPs) to the space of probability measures and consider the so-called measurized MDPs: deterministic processes whose states are probability measures on the original state space and whose actions are stochastic kernels on the original action space. We show that measurized MDPs generalize stochastic MDPs, so the measurized framework can be deployed without loss of fidelity. Bertsekas and Shreve studied similar deterministic MDPs under the discounted infinite-horizon criterion in the context of universally measurable policies. Here, we also consider the long-run average-reward case, but we cast lifted MDPs within the semicontinuous-semicompact framework of Hernández-Lerma and Lasserre. This makes the lifted framework more accessible, as it entails (i) optimal Borel-measurable value functions and policies, (ii) reasonably mild assumptions that are easier to verify than those of the universally measurable framework, and (iii) simpler proofs. In addition, we showcase the untapped potential of lifted MDPs by demonstrating how the measurized framework enables the incorporation of constraints and value function approximations that are not available in the standard MDP setting. Furthermore, we introduce a novel algebraic lifting procedure for arbitrary MDPs, showing that non-deterministic measure-valued MDPs can emerge from lifting MDPs affected by external random shocks.
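To make the lifting concrete, here is a minimal NumPy sketch for a finite MDP; the array names, the `lifted_step` helper, and the toy numbers are illustrative assumptions, not taken from the paper. It shows the key point of the abstract: once states are lifted to distributions and actions to stochastic kernels, the transition dynamics become deterministic (a pushforward of the current measure).

```python
import numpy as np

# Sketch of measurized dynamics for a finite MDP (assumed notation):
# a lifted state is a distribution mu over the S original states,
# a lifted action is a kernel pi with pi[s, a] = pi(a | s),
# and the next lifted state is the deterministic pushforward
#   mu'(s') = sum_{s, a} mu(s) * pi(a | s) * P(s' | s, a).

def lifted_step(mu, pi, P):
    """Deterministic transition of the lifted (measurized) MDP.

    mu : (S,)      distribution over original states
    pi : (S, A)    stochastic kernel on original actions (rows sum to 1)
    P  : (S, A, S) original transition probabilities
    """
    return np.einsum("s,sa,sap->p", mu, pi, P)

# Toy 2-state, 2-action example (hypothetical numbers, for illustration).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
pi = np.array([[1.0, 0.0],
               [0.5, 0.5]])
mu = np.array([1.0, 0.0])

print(lifted_step(mu, pi, P))  # next measure-valued state: [0.9, 0.1]
```

Note that randomness has been absorbed into the state: for fixed `mu` and `pi`, the successor measure is unique, which is exactly what lets the lifted process be treated as a deterministic MDP over the space of probability measures.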