Adversarial Combinatorial Bandits with Switching Costs (2404.01883v1)

Published 2 Apr 2024 in stat.ML and cs.LG

Abstract: We study the adversarial combinatorial bandit problem with a switching cost $\lambda$ incurred for each selected arm that changes between consecutive rounds, considering both the bandit feedback and semi-bandit feedback settings. In the oblivious adversarial case with $K$ base arms and time horizon $T$, we derive lower bounds for the minimax regret and design algorithms to approach them. To prove these lower bounds, we design stochastic loss sequences for both feedback settings, building on an idea from Dekel et al. (2014). The lower bound for bandit feedback is $\tilde{\Omega}\big( (\lambda K)^{\frac{1}{3}} (TI)^{\frac{2}{3}}\big)$, while that for semi-bandit feedback is $\tilde{\Omega}\big( (\lambda K I)^{\frac{1}{3}} T^{\frac{2}{3}}\big)$, where $I$ is the number of base arms in the combinatorial arm played in each round. To approach these lower bounds, we design algorithms that operate in batches: the time horizon is divided into batches so as to restrict the number of switches between actions. For the bandit feedback setting, where only the total loss of the combinatorial arm is observed, we introduce the Batched-Exp2 algorithm, which achieves a regret upper bound of $\tilde{O}\big((\lambda K)^{\frac{1}{3}}T^{\frac{2}{3}}I^{\frac{4}{3}}\big)$ as $T$ tends to infinity. In the semi-bandit feedback setting, where all losses for the combinatorial arm are observed, we propose the Batched-BROAD algorithm, which achieves a regret upper bound of $\tilde{O}\big( (\lambda K)^{\frac{1}{3}} (TI)^{\frac{2}{3}}\big)$.
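The batching idea can be made concrete with a small sketch. The Python snippet below is an illustrative, simplified rendering of a batched exponential-weights learner under bandit feedback, not the paper's exact Batched-Exp2 algorithm: the combinatorial action set is enumerated explicitly (feasible only for small $K$), the batch length `tau` and the parameters `eta` and `gamma` are assumed tuning knobs, and `loss_fn` is a hypothetical environment callback returning the total loss of the played combinatorial arm in a given round.

```python
import itertools
import math
import random

def batched_exp2_sketch(K, I, T, tau, loss_fn, eta=0.1, gamma=0.01):
    """Illustrative batched exponential-weights learner (bandit feedback).

    The action is frozen within each batch of length tau to limit the
    switching cost; weights are updated once per batch using a naive
    importance-weighted estimate of the batch-average loss.
    """
    actions = list(itertools.combinations(range(K), I))  # all size-I subsets
    weights = [1.0] * len(actions)

    total_loss = 0.0
    t = 0
    while t < T:
        # Sample one action per batch from a mixture of the exponential
        # weights and uniform exploration (the gamma term).
        norm = sum(weights)
        probs = [(1 - gamma) * w / norm + gamma / len(actions) for w in weights]
        idx = random.choices(range(len(actions)), weights=probs)[0]
        action = actions[idx]

        # Play the same action for the whole batch: no switches inside it.
        batch_len = min(tau, T - t)
        batch_loss = sum(loss_fn(action, t + s) for s in range(batch_len))
        total_loss += batch_loss

        # Bandit feedback gives one scalar per round; credit an
        # importance-weighted loss estimate to the played action only.
        est = (batch_loss / batch_len) / probs[idx]
        weights[idx] *= math.exp(-eta * est)
        t += batch_len

    return total_loss
```

Since the action changes at most once per batch, a run with roughly $T/\tau$ batches incurs switching cost at most about $\lambda I T/\tau$; tuning the batch length to balance this against the learning regret is, heuristically, what produces the $T^{2/3}$-type rates stated in the abstract.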

References (12)
  1. S. Guha and K. Munagala, "Multi-armed bandits with metric switching costs," in Automata, Languages and Programming: 36th International Colloquium, ICALP 2009, Rhodes, Greece, July 5-12, 2009, Proceedings, Part II. Springer Berlin Heidelberg, 2009, pp. 496–507.
  2. M. Shi, X. Lin, and L. Jiao, "Power-of-2-arms for bandit learning with switching costs," in Proceedings of the Twenty-Third International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, 2022, pp. 131–140.
  3. R. Arora, O. Dekel, and A. Tewari, "Online bandit learning against an adaptive adversary: From regret to policy regret," in Proceedings of the 29th International Conference on Machine Learning. PMLR, 2012, pp. 1747–1754.
  4. O. Dekel, J. Ding, T. Koren, and Y. Peres, "Bandits with switching costs: $T^{2/3}$ regret," in Proceedings of the 46th Annual ACM Symposium on Theory of Computing, 2014, pp. 459–467.
  5. C. Rouyer, Y. Seldin, and N. Cesa-Bianchi, "An algorithm for stochastic and adversarial bandits with switching costs," in International Conference on Machine Learning. PMLR, 2021, pp. 9127–9135.
  6. P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire, "The nonstochastic multiarmed bandit problem," SIAM Journal on Computing, vol. 32, no. 1, pp. 48–77, 2002.
  7. S. Bubeck, "Introduction to online optimization," Lecture notes, vol. 2, pp. 1–86, 2011.
  8. C.-Y. Wei and H. Luo, "More adaptive algorithms for adversarial bandits," in Conference on Learning Theory. PMLR, 2018, pp. 1263–1291.
  9. R. Combes, M. Sadegh Talebi, A. Proutiere, and M. Lelarge, "Combinatorial bandits revisited," Advances in Neural Information Processing Systems, vol. 28, 2015.
  10. A. C.-C. Yao, "Probabilistic computations: Toward a unified measure of complexity," in 18th Annual Symposium on Foundations of Computer Science (SFCS 1977). IEEE Computer Society, 1977, pp. 222–227.
  11. J.-Y. Audibert, S. Bubeck, and G. Lugosi, "Regret in online combinatorial optimization," Mathematics of Operations Research, vol. 39, no. 1, pp. 31–45, 2014.
  12. J. Zimmert, H. Luo, and C.-Y. Wei, "Beating stochastic and adversarial semi-bandits optimally and simultaneously," in International Conference on Machine Learning. PMLR, 2019, pp. 7683–7692.
Authors (2)
  1. Yanyan Dong (5 papers)
  2. Vincent Y. F. Tan (205 papers)