An Efficient Algorithm for Fair Multi-Agent Multi-Armed Bandit with Low Regret (2209.11817v1)

Published 23 Sep 2022 in cs.LG and cs.DS

Abstract: Recently, a multi-agent variant of the classical multi-armed bandit was proposed to tackle fairness issues in online learning. Inspired by a long line of work in social choice and economics, the goal is to optimize the Nash social welfare instead of the total utility. Unfortunately, previous algorithms are either inefficient or achieve sub-optimal regret in terms of the number of rounds $T$. We propose a new efficient algorithm with lower regret than even the previous inefficient ones. For $N$ agents, $K$ arms, and $T$ rounds, our approach has a regret bound of $\tilde{O}(\sqrt{NKT} + NK)$. This improves on the previous approach, whose regret bound is $\tilde{O}(\min(NK, \sqrt{N} K^{3/2})\sqrt{T})$. We also complement our efficient algorithm with an inefficient approach achieving $\tilde{O}(\sqrt{KT} + N^2 K)$ regret. The experimental findings confirm the effectiveness of our efficient algorithm compared to previous approaches.
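
For context, the Nash social welfare objective referenced in the abstract can be sketched as follows. This is the standard formulation from the fair multi-agent bandit line of work; the notation below is illustrative and may differ from the paper's own.

$$\mathrm{NSW}(p) \;=\; \prod_{i=1}^{N} \sum_{k=1}^{K} p_k\, \mu_i(k), \qquad p^{\star} \;=\; \arg\max_{p \in \Delta_K} \mathrm{NSW}(p),$$

$$\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \Bigl( \mathrm{NSW}(p^{\star}) - \mathrm{NSW}(p_t) \Bigr),$$

where $p_t \in \Delta_K$ is the distribution over the $K$ arms played in round $t$ and $\mu_i(k)$ is agent $i$'s mean reward for arm $k$. Maximizing the product of the agents' expected utilities (rather than their sum) is what distinguishes the Nash social welfare objective from the total-utility objective mentioned above.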

Authors (3)
  1. Matthew Jones (18 papers)
  2. Huy Lê Nguyen (28 papers)
  3. Thy Nguyen (6 papers)
Citations (3)