On the price of exact truthfulness in incentive-compatible online learning with bandit feedback: A regret lower bound for WSU-UX (2404.05155v1)
Abstract: In one view of the classical game of prediction with expert advice with binary outcomes, in each round, each expert maintains an adversarially chosen belief and honestly reports this belief. We consider a recently introduced, strategic variant of this problem with selfish (reputation-seeking) experts, where each expert strategically reports in order to maximize their expected future reputation based on their belief. In this work, our goal is to design an algorithm for the selfish experts problem that is incentive-compatible (IC, or \emph{truthful}), meaning each expert's best strategy is to report truthfully, while also ensuring the algorithm enjoys sublinear regret with respect to the expert with the best belief. Freeman et al. (2020) studied this problem in the full information and bandit settings and obtained truthful, no-regret algorithms by leveraging prior work on wagering mechanisms. While their results under full information match the minimax rate for the classical ("honest experts") problem, the best-known regret for their bandit algorithm WSU-UX is $O(T^{2/3})$, which does not match the minimax rate for the classical ("honest bandits") setting. It was unclear whether the higher regret was an artifact of their analysis or a limitation of WSU-UX. We show, via an explicit construction of loss sequences, that the algorithm suffers a worst-case $\Omega(T^{2/3})$ lower bound. Left open is the possibility that a different IC algorithm obtains $O(\sqrt{T})$ regret. Yet, WSU-UX was a natural choice for such an algorithm owing to the limited design space for IC algorithms in this setting.
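To make the bandit setting concrete, below is a minimal sketch of a WSU-UX-style round, following the description in Freeman et al. (2020): weights are smoothed with uniform exploration, a single expert is sampled (bandit feedback), and only that expert's importance-weighted quadratic loss drives a weight update that is linear in the loss, the property that yields incentive compatibility. The function name, interface, and parameter choices here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def wsu_ux_sketch(reports, outcomes, eta, gamma, rng=None):
    """Illustrative WSU-UX-style updates (after Freeman et al., 2020).

    reports:  (T, n) array; reports[t, i] is expert i's reported
              probability of outcome 1 in round t.
    outcomes: length-T array of realized outcomes in {0, 1}.
    Assumes eta and gamma are tuned so that roughly eta * n / gamma <= 1,
    which keeps all weights nonnegative.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, n = reports.shape
    pi = np.full(n, 1.0 / n)                  # current weights over experts
    total_loss = 0.0
    for t in range(T):
        # Uniform exploration: play from a gamma-smoothed distribution.
        pi_hat = (1.0 - gamma) * pi + gamma / n
        pi_hat /= pi_hat.sum()                # guard against float drift
        i = rng.choice(n, p=pi_hat)           # bandit feedback: one expert
        loss = (reports[t, i] - outcomes[t]) ** 2   # quadratic (Brier) loss
        total_loss += loss
        # Unbiased importance-weighted loss estimates (zero off the pick).
        ell = np.zeros(n)
        ell[i] = loss / pi_hat[i]
        # WSU step: each weight moves linearly in its estimated loss
        # relative to the weighted average, so the weights still sum to 1;
        # linearity in a strictly proper scoring rule is what makes
        # truthful reporting each expert's best strategy.
        pi = pi * (1.0 + eta * (pi @ ell - ell))
    return total_loss
```

Under the tuning of $\eta$ and $\gamma$ in Freeman et al. (2020), this scheme was shown to achieve $O(T^{2/3})$ regret; the present paper constructs loss sequences showing a matching $\Omega(T^{2/3})$ lower bound for it.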
- Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
- Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
- Andreas Buja, Werner Stuetzle, and Yi Shen. Loss functions for binary class probability estimation and classification: Structure and applications. Working draft, November 2005.
- Nicolò Cesa-Bianchi and Gábor Lugosi. Potential-based algorithms in on-line prediction and game theory. Machine Learning, 51:239–261, 2003.
- Rupert Freeman, David M. Pennock, Chara Podimata, and Jennifer Wortman Vaughan. No-regret and incentive-compatible online learning. In International Conference on Machine Learning, pages 3270–3279. PMLR, 2020.
- Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
- Rafael Frongillo, Robert Gomez, Anish Thilagar, and Bo Waggoner. Efficient competitions and online learning with strategic forecasters. In Proceedings of the 22nd ACM Conference on Economics and Computation, pages 479–496, 2021.
- Pierre Gaillard, Gilles Stoltz, and Tim van Erven. A second-order bound with excess losses. In Conference on Learning Theory, pages 176–196. PMLR, 2014.
- Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
- Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157–325, 2016.
- Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
- William Kuszmaul and Qi Qi. The multiplicative version of Azuma's inequality, with an application to contention analysis. arXiv preprint arXiv:2102.05077, 2021.
- Nicolas Lambert, John Langford, Jennifer Wortman, Yiling Chen, Daniel Reeves, Yoav Shoham, and David M. Pennock. Self-financed wagering mechanisms for forecasting. In Proceedings of the 9th ACM Conference on Electronic Commerce, pages 170–179, 2008.
- Tim Roughgarden and Okke Schrijvers. Online prediction with selfish experts. Advances in Neural Information Processing Systems, 30, 2017.
- Vladimir Vovk. A game of prediction with expert advice. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, pages 51–60, 1995.
- Ali Mortazavi
- Junhao Lin
- Nishant A. Mehta