Stochastic Shortest Path Games and Q-Learning (1412.8570v1)

Published 30 Dec 2014 in math.OC

Abstract: We consider a class of two-player zero-sum stochastic games with finite state and compact control spaces, which we call stochastic shortest path (SSP) games. They are undiscounted total cost stochastic dynamic games that have a cost-free termination state. Exploiting the close connection of these games to single-player SSP problems, we introduce novel model conditions under which we show that the SSP games have strong optimality properties, including the existence of a unique solution to the dynamic programming equation, the existence of optimal stationary policies, and the convergence of value and policy iteration. We then focus on finite state and control SSP games and the classical Q-learning algorithm for computing the value function. Q-learning is a model-free, asynchronous stochastic iterative algorithm. By the theory of stochastic approximation involving monotone nonexpansive mappings, it is known to converge when its associated dynamic programming equation has a unique solution and its iterates are bounded with probability one. For the SSP case, as the main result of this paper, we prove the boundedness of the Q-learning iterates under our proposed model conditions, thereby establishing completely the convergence of Q-learning for a broad class of total cost finite-space stochastic games.
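Below is a minimal, illustrative sketch of the kind of minimax Q-learning iteration the abstract refers to, for a two-player zero-sum SSP game with finite state and control spaces. All problem data in the snippet (number of states, costs, transition probabilities, step sizes) are hypothetical and not taken from the paper; the point is only to show the shape of the asynchronous update, in which each sampled transition updates a single Q-entry and the one-stage game at the next state is solved as a small matrix game.

```python
# Hypothetical minimax Q-learning sketch for a zero-sum SSP game.
# All model data below are made up for illustration; they are not from the paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical SSP game: states 0..2 are non-terminal, state 3 is the
# cost-free termination state. The minimizer and maximizer each have 2 controls.
n_states, n_u, n_v = 4, 2, 2
TERMINAL = 3

# One-stage costs g(s, u, v) and transition probabilities P(s' | s, u, v),
# chosen so that termination is reached with probability one.
g = rng.uniform(0.5, 2.0, size=(n_states, n_u, n_v))
g[TERMINAL] = 0.0
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_u, n_v))
P[..., TERMINAL] += 0.3                      # bias transitions toward termination
P /= P.sum(axis=-1, keepdims=True)
P[TERMINAL] = 0.0
P[TERMINAL, :, :, TERMINAL] = 1.0            # termination state is absorbing

def matrix_game_value(A):
    """Value of the zero-sum matrix game min_p max_j p^T A[:, j],
    solved as a small LP over the minimizer's mixed strategies."""
    m = A.shape[0]
    c = np.r_[np.zeros(m), 1.0]              # minimize the scalar t
    A_ub = np.c_[A.T, -np.ones(A.shape[1])]  # p^T A[:, j] - t <= 0 for every column j
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(A.shape[1]),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[-1]

# Asynchronous Q-learning: one (state, control-pair) entry is updated per sample.
Q = np.zeros((n_states, n_u, n_v))
visits = np.zeros_like(Q)

for episode in range(2000):
    s = rng.integers(0, TERMINAL)            # start from a non-terminal state
    while s != TERMINAL:
        u = rng.integers(n_u)                # exploratory (uniform) controls
        v = rng.integers(n_v)
        s_next = rng.choice(n_states, p=P[s, u, v])
        visits[s, u, v] += 1
        alpha = 1.0 / visits[s, u, v]        # diminishing step size
        target = g[s, u, v] + (0.0 if s_next == TERMINAL
                               else matrix_game_value(Q[s_next]))
        Q[s, u, v] += alpha * (target - Q[s, u, v])
        s = s_next

# Estimated equilibrium value at each state: the value of the matrix game Q(s, ., .)
value = np.array([matrix_game_value(Q[s]) for s in range(TERMINAL)])
print("Estimated game values:", value)
```

The LP in `matrix_game_value` computes the value of the one-stage matrix game over mixed strategies, since in general a zero-sum matrix game has no pure-strategy saddle point; this is one common way to evaluate the minimax operation inside the Q-learning target, not necessarily the procedure used in the paper.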
