Average Cost Optimality of Partially Observed MDPs: Contraction of Non-linear Filters, Optimal Solutions and Approximations
Abstract: Average cost optimality is known to be a challenging problem for partially observable stochastic control, with few results available beyond the finite state, action, and measurement setup, and even in that setup the available conditions are somewhat restrictive. In this paper, we present explicit and easily testable conditions for the existence of solutions to the average cost optimality equation when the state space is compact. In particular, we present a contraction-based analysis, which to our knowledge is new to the literature, building on recent regularity results for non-linear filters. Beyond establishing existence, we also present several implications of our analysis that are new to the literature: (i) robustness to incorrect priors, (ii) near optimality of policies based on quantized approximations, (iii) near optimality of policies with finite memory, and (iv) convergence of Q-learning. In addition to our main theorem, each of these represents a novel contribution for the average cost criterion.
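For context, the average cost optimality equation (ACOE) referenced in the abstract is typically stated on the belief (filter) space for partially observed models. The display below is a minimal sketch of its standard form with generic notation, not the paper's own symbols or assumptions:

\[
\rho^* + h(z) \;=\; \min_{a \in \mathbb{A}} \Big[ \tilde{c}(z,a) \;+\; \int_{\mathcal{Z}} h(z')\, \eta(dz' \mid z, a) \Big], \qquad z \in \mathcal{Z},
\]

where $\mathcal{Z}$ denotes the belief space (probability measures on the hidden state space), $\rho^*$ the optimal average cost, $h$ a relative value function, $\tilde{c}$ the stage cost averaged under the belief, and $\eta$ the filter (belief) transition kernel. Under standard measurable selection conditions, a selector attaining the minimum defines an optimal stationary policy on the belief space.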