Distributed Cooperative Decision-Making in Multiarmed Bandits: Frequentist and Bayesian Algorithms (1606.00911v3)

Published 2 Jun 2016 in cs.SY, cs.LG, and math.OC

Abstract: We study distributed cooperative decision-making under the explore-exploit tradeoff in the multiarmed bandit (MAB) problem. We extend the state-of-the-art frequentist and Bayesian algorithms for single-agent MAB problems to cooperative distributed algorithms for multi-agent MAB problems in which agents communicate according to a fixed network graph. We rely on a running consensus algorithm for each agent's estimation of mean rewards from its own rewards and the estimated rewards of its neighbors. We prove the performance of these algorithms and show that they asymptotically recover the performance of a centralized agent. Further, we rigorously characterize the influence of the communication graph structure on the decision-making performance of the group.
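The core mechanism described in the abstract can be illustrated with a minimal sketch: each agent keeps running-consensus estimates of per-arm pull counts and reward sums, mixing its neighbors' statistics through a fixed row-stochastic matrix before adding its own fresh observation, and then selects arms with a UCB-style rule. This is not the paper's exact algorithm; the graph, consensus weights, and exploration bonus below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): 4 agents on a cycle graph,
# 3 Bernoulli arms with fixed means.
n_agents, n_arms, horizon = 4, 3, 2000
true_means = np.array([0.2, 0.5, 0.8])

# Row-stochastic consensus matrix for the cycle (lazy Metropolis-style weights).
P = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Running-consensus estimates of pull counts and reward sums, per agent/arm.
n_hat = np.ones((n_agents, n_arms))   # small prior avoids division by zero
s_hat = np.zeros((n_agents, n_arms))

for t in range(1, horizon + 1):
    mu_hat = s_hat / n_hat
    bonus = np.sqrt(2 * np.log(t) / n_hat)    # UCB-style exploration term
    arms = np.argmax(mu_hat + bonus, axis=1)  # each agent picks its own arm

    rewards = rng.binomial(1, true_means[arms])  # Bernoulli rewards
    pulls = np.eye(n_arms)[arms]                 # one-hot pull indicators

    # Running consensus: mix neighbors' statistics, then add fresh observations.
    n_hat = P @ n_hat + pulls
    s_hat = P @ s_hat + pulls * rewards[:, None]

# After many rounds, every agent's estimate should favor the best arm (index 2).
print(np.argmax(s_hat / n_hat, axis=1))
```

Because the consensus matrix is row-stochastic, each agent's statistics track a weighted history of the whole group's observations, which is what lets the distributed estimates approach those of a centralized agent as the abstract describes.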

Authors (3)
  1. Peter Landgren (3 papers)
  2. Vaibhav Srivastava (53 papers)
  3. Naomi Ehrich Leonard (61 papers)
Citations (106)
