Constrained discounted stochastic games (2112.07960v1)

Published 15 Dec 2021 in math.OC

Abstract: In this paper, we consider a large class of constrained non-cooperative stochastic Markov games with countable state spaces and discounted cost criteria. In the one-player case, i.e., for constrained discounted Markov decision models, it is possible to formulate a static optimisation problem whose solution determines a stationary optimal strategy (alias control or policy) in the dynamical infinite-horizon model. This solution lies in the compact convex set of all occupation measures induced by strategies, defined on the set of state-action pairs. In the case of n-person discounted games, the occupation measures are induced by the strategies of all players. Therefore, it is difficult to generalise the approach for constrained discounted Markov decision processes directly: it is not clear how to define the domain of the best-response correspondence whose fixed point induces a stationary equilibrium in the Markov game. This domain should be the Cartesian product of compact convex sets in locally convex topological vector spaces. One of our main results shows how to overcome this difficulty and define a constrained non-cooperative static game whose Nash equilibrium induces a stationary Nash equilibrium in the Markov game. This is done for games with bounded cost functions and a positive initial state distribution. An extension to a class of Markov games with unbounded costs and an arbitrary initial state distribution relies on approximating the unbounded game by bounded ones with positive initial state distributions. In the unbounded case, we assume uniform integrability of the discounted costs with respect to all probability measures induced by the strategies of the players, defined on the space of plays (histories) of the game. Our assumptions are weaker than those applied in earlier works on discounted dynamic programming or stochastic games using so-called weighted-norm approaches.
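
For readers unfamiliar with the occupation-measure approach the abstract refers to, the following is a minimal sketch of the standard one-player (constrained discounted MDP) formulation; it is background from the constrained-MDP literature, not taken from the paper, and the notation (state space X, admissible actions A(x), transition kernel q, discount factor beta, initial distribution gamma, costs c_k, constraint bounds d_k) is assumed here for illustration.

% Discounted occupation measure of a strategy \pi under initial distribution \gamma:
\[
  \mu_{\pi}(x,a) \;=\; (1-\beta)\sum_{t=0}^{\infty} \beta^{t}\,
    \mathbb{P}^{\pi}_{\gamma}\bigl(X_t = x,\ A_t = a\bigr),
  \qquad x \in X,\ a \in A(x).
\]

% Static (linear-programming) problem over occupation measures:
\[
  \min_{\mu}\ \sum_{x,a} c_0(x,a)\,\mu(x,a)
  \quad\text{subject to}\quad
  \sum_{x,a} c_k(x,a)\,\mu(x,a) \le d_k,\qquad k=1,\dots,m,
\]
\[
  \sum_{a \in A(y)} \mu(y,a)
  \;=\; (1-\beta)\,\gamma(y) \;+\; \beta \sum_{x,a} q(y \mid x,a)\,\mu(x,a)
  \qquad\text{for all } y \in X.
\]

% An optimal \mu^{*} induces a stationary optimal strategy by disintegration:
\[
  \pi^{*}(a \mid x) \;=\; \frac{\mu^{*}(x,a)}{\sum_{b \in A(x)} \mu^{*}(x,b)}
  \qquad\text{whenever the denominator is positive.}
\]

The feasible set of this linear program is the compact convex set of occupation measures mentioned in the abstract. The difficulty the paper addresses is that in the n-person game each player's occupation measure depends on the strategies of all players, so this set is no longer a fixed domain for a single player's best-response correspondence.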
