Finite-Horizon Markov Decision Processes with Sequentially-Observed Transitions (1507.01151v1)
Published 4 Jul 2015 in math.OC and cs.SY
Abstract: Markov Decision Processes (MDPs) have been used to formulate many decision-making problems in science and engineering. The objective is to synthesize the best decision (action-selection) policies to maximize expected rewards (or minimize costs) in a given stochastic dynamical environment. In this paper, we extend this model by incorporating the additional information that the transitions due to actions can be sequentially observed. The proposed model benefits from this information and produces policies with better performance than those of standard MDPs. The paper also presents an efficient offline linear-programming-based algorithm to synthesize optimal policies for the extended model.
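For context, the sketch below shows the standard linear-programming formulation of finite-horizon policy synthesis for an ordinary MDP, which is the baseline the paper extends. It is an illustrative assumption, not the paper's algorithm for the sequentially-observed-transitions model; all sizes and variable names (nS, nA, T, P, r) are hypothetical.

```python
# Minimal sketch: finite-horizon MDP policy synthesis via an LP (baseline model only).
# Assumes SciPy's linprog; the paper's extended LP for sequentially observed
# transitions is not reproduced here.
import numpy as np
from scipy.optimize import linprog

nS, nA, T = 3, 2, 4                             # states, actions, horizon (assumed sizes)
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # P[a, s, s'] transition probabilities
r = rng.random((nS, nA))                        # r[s, a] immediate reward

# LP variables: v[t, s] for t = 0..T-1; the terminal value at t = T is taken to be 0.
idx = lambda t, s: t * nS + s
nvar = T * nS

# Bellman constraints  v[t, s] >= r[s, a] + sum_{s'} P[a, s, s'] * v[t+1, s']
# rewritten for linprog as  -v[t, s] + sum_{s'} P[a, s, s'] * v[t+1, s'] <= -r[s, a].
A_ub, b_ub = [], []
for t in range(T):
    for s in range(nS):
        for a in range(nA):
            row = np.zeros(nvar)
            row[idx(t, s)] = -1.0
            if t + 1 < T:
                row[idx(t + 1, 0):idx(t + 1, 0) + nS] += P[a, s]
            A_ub.append(row)
            b_ub.append(-r[s, a])

# Minimizing the sum of all v[t, s] drives each value down to the dynamic-programming optimum.
res = linprog(c=np.ones(nvar), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * nvar, method="highs")
v = res.x.reshape(T, nS)

# Recover a greedy (optimal) deterministic policy from the computed value functions.
policy = np.zeros((T, nS), dtype=int)
for t in range(T):
    v_next = v[t + 1] if t + 1 < T else np.zeros(nS)
    q = r + (P @ v_next).T                      # q[s, a] = r[s, a] + E[v_{t+1}]
    policy[t] = q.argmax(axis=1)

print("values at t=0:", v[0])
print("policy (rows = time steps):", policy, sep="\n")
```

The extended model in the paper reuses this offline LP machinery but exploits the sequential observation of transitions to construct policies that can outperform the standard-MDP solution above.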