
Randomized Greedy Learning for Non-monotone Stochastic Submodular Maximization Under Full-bandit Feedback (2302.01324v1)

Published 2 Feb 2023 in cs.LG, cs.AI, and math.OC

Abstract: We investigate the problem of unconstrained combinatorial multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization. Previous works investigate the same problem assuming a submodular and monotone reward function. In this work, we study a more general problem, i.e., when the reward function is not necessarily monotone, and the submodularity is assumed only in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and theoretically prove that it achieves a $\frac{1}{2}$-regret upper bound of $\tilde{\mathcal{O}}(n T^{\frac{2}{3}})$ for horizon $T$ and number of arms $n$. We also show in experiments that RGL empirically outperforms other full-bandit variants in submodular and non-submodular settings.
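The abstract builds on the randomized double-greedy template for unconstrained (non-monotone) submodular maximization, where each element is added to a growing set or dropped from a shrinking set with probability proportional to its clipped marginal gains. The sketch below illustrates that template under full-bandit feedback by averaging repeated noisy set evaluations; the function names and the naive fixed-samples-per-query scheme are illustrative assumptions, not the paper's exact RGL exploration schedule.

```python
import random

def randomized_double_greedy(ground_set, noisy_f, samples_per_query=50):
    """Sketch: double-greedy pass using averaged noisy full-bandit evaluations.

    `noisy_f(S)` is a stochastic oracle whose expectation is a submodular
    (not necessarily monotone) set function f(S). This is an illustrative
    sketch, not the paper's exact RGL procedure.
    """
    def est(S):
        # Average repeated full-bandit evaluations to reduce noise.
        return sum(noisy_f(S) for _ in range(samples_per_query)) / samples_per_query

    X, Y = set(), set(ground_set)
    for i in ground_set:
        a = est(X | {i}) - est(X)      # estimated gain of adding i to X
        b = est(Y - {i}) - est(Y)      # estimated gain of removing i from Y
        a_pos, b_pos = max(a, 0.0), max(b, 0.0)
        # Add i with probability proportional to its clipped gain.
        p = 1.0 if a_pos + b_pos == 0 else a_pos / (a_pos + b_pos)
        if random.random() < p:
            X.add(i)                   # keep i: X grows, Y unchanged
        else:
            Y.discard(i)               # drop i: Y shrinks, X unchanged
    return X                           # X == Y after the pass

# Example: maximize a noisy graph-cut value, a submodular and
# non-monotone reward, over a 4-node ground set (all names hypothetical).
random.seed(0)
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
def noisy_cut(S):
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    return cut + random.gauss(0, 0.1)  # stochastic reward
solution = randomized_double_greedy(range(4), noisy_cut)
```

Trading off the number of evaluations per set against the number of elements processed is what drives the $\tilde{\mathcal{O}}(n T^{\frac{2}{3}})$ regret scaling in such full-bandit schemes.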

Authors (4)
  1. Fares Fourati (12 papers)
  2. Vaneet Aggarwal (222 papers)
  3. Christopher John Quinn (7 papers)
  4. Mohamed-Slim Alouini (524 papers)
Citations (11)
