Meta Learning in Bandits within Shared Affine Subspaces (2404.00688v1)

Published 31 Mar 2024 in cs.LG and stat.ML

Abstract: We study the problem of meta-learning several contextual stochastic bandit tasks by leveraging their concentration around a low-dimensional affine subspace, which we learn via online principal component analysis to reduce the expected regret over the encountered bandits. We propose and theoretically analyze two strategies that solve the problem: one based on the principle of optimism in the face of uncertainty, the other via Thompson sampling. Our framework is generic and includes previously proposed approaches as special cases. Moreover, empirical results show that our methods significantly reduce regret on several bandit tasks.
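The abstract's central mechanism, estimating a shared affine subspace online from a stream of per-task parameters, can be illustrated with Oja's rule, a standard online-PCA update. The sketch below is illustrative only and is not the paper's algorithm: the dimensions, step sizes, noise levels, and all variable names are assumptions, and the synthetic "task parameters" stand in for the per-task estimates a bandit learner would produce.

```python
import numpy as np

def oja_update(U, x, lr):
    """One Oja's-rule step: nudge the basis U (d x k) toward the top-k
    principal subspace of the stream, then re-orthonormalize via QR."""
    U = U + lr * np.outer(x, x @ U)  # stochastic gradient step along x
    Q, _ = np.linalg.qr(U)           # keep the columns orthonormal
    return Q

rng = np.random.default_rng(0)
d, k, n_tasks = 10, 2, 2000

# Hypothetical generative model: task parameters concentrate around an
# affine subspace mean + span(B), with small isotropic noise.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
mean = rng.standard_normal(d)
thetas = (mean
          + rng.standard_normal((n_tasks, k)) @ B.T
          + 0.01 * rng.standard_normal((n_tasks, d)))

# Online estimate of the affine subspace: running mean for the offset,
# Oja's rule for the k-dimensional basis of the centered stream.
mu = np.zeros(d)
U = np.linalg.qr(rng.standard_normal((d, k)))[0]
for t, theta in enumerate(thetas, start=1):
    mu += (theta - mu) / t                  # running mean of parameters
    U = oja_update(U, theta - mu, lr=1.0 / t)

# Alignment check: singular values of B^T U are cosines of the principal
# angles between the true and learned subspaces (near 1 when aligned).
align = np.linalg.svd(B.T @ U, compute_uv=False)
print(np.round(align, 3))
```

In the paper's setting, the learned subspace would then bias each new task's exploration (via an optimistic confidence set or a Thompson-sampling prior concentrated near `mu + span(U)`); this sketch only covers the subspace-estimation step.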

