
Exploiting Fast Decaying and Locality in Multi-Agent MDP with Tree Dependence Structure (1909.06900v1)

Published 15 Sep 2019 in math.OC and cs.LG

Abstract: This paper considers a multi-agent Markov Decision Process (MDP) with $n$ agents, where each agent $i$ is associated with a state $s_i$ and an action $a_i$ taking values in a finite set. Although the global state space and action space are exponentially large in $n$, we impose local dependence structures, focus on local policies that depend only on local states, and propose a method that finds nearly optimal local policies in time polynomial in $n$ when the dependence structure is a one-directional tree. The algorithm builds on approximated reward functions that are evaluated using a locally truncated Markov process. Further, under certain conditions, we prove that the gap between the approximated reward function and the true reward function decays exponentially fast as the length of the truncated Markov process grows. The intuition is that, under these assumptions, the effect of agent interactions decays exponentially in the distance between agents, a property we term the "fast decaying property".
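The "fast decaying property" can be illustrated with a small simulation. The sketch below is not the paper's algorithm: it uses a directed chain (the simplest one-directional tree) in which agent 0's state is pinned as a boundary condition and each agent i > 0 copies its parent's state with probability eps, otherwise re-randomizing. Under these toy dynamics the influence of the pinned agent on agent d should shrink roughly like eps**d; all names, parameters, and dynamics here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, S, T, eps, trials = 8, 2, 50, 0.5, 100_000

def rollout(pinned):
    """Simulate `trials` independent chains for T steps with agent 0
    held at state `pinned`; return final states, shape (trials, n)."""
    s = rng.integers(S, size=(trials, n))
    s[:, 0] = pinned
    for _ in range(T):
        copy = rng.random((trials, n)) < eps       # copy-parent events
        fresh = rng.integers(S, size=(trials, n))  # local re-randomizations
        nxt = np.where(copy, np.roll(s, 1, axis=1), fresh)
        nxt[:, 0] = pinned                         # keep the boundary fixed
        s = nxt
    return s

final0, final1 = rollout(0), rollout(1)
for d in range(1, n):
    # Gap in agent d's final-state distribution between the two
    # boundary conditions: a proxy for agent 0's influence at distance d.
    gap = abs((final0[:, d] == 0).mean() - (final1[:, d] == 0).mean())
    print(f"distance {d}: influence ~ {gap:.4f}   eps**d = {eps**d:.4f}")
```

Under these assumptions, the measured influence tracks eps**d, which is the intuition behind the paper's truncation: if distant agents barely matter, evaluating an agent's reward on a local neighborhood of the truncated Markov process incurs an error that vanishes exponentially in the truncation length.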

Authors (2)
  1. Guannan Qu (48 papers)
  2. Na Li (227 papers)
Citations (20)
