
Multi-Environment Meta-Learning in Stochastic Linear Bandits (2205.06326v1)

Published 12 May 2022 in cs.LG

Abstract: In this work we investigate meta-learning (or learning-to-learn) approaches in multi-task linear stochastic bandit problems that can originate from multiple environments. Inspired by the work of [1] on meta-learning in a sequence of linear bandit problems whose parameters are sampled from a single distribution (i.e., a single environment), here we consider the feasibility of meta-learning when task parameters are drawn from a mixture distribution instead. For this problem, we propose a regularized version of the OFUL algorithm that, when trained on tasks with labeled environments, achieves low regret on a new task without requiring knowledge of the environment from which the new task originates. Specifically, our regret bound for the new algorithm captures the effect of environment misclassification and highlights the benefits over learning each task separately or meta-learning without recognition of the distinct mixture components.
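The abstract describes a regularized variant of OFUL that biases each task's ridge estimate toward a learned environment mean. As a minimal illustrative sketch (not the paper's actual algorithm), the core idea can be shown as ridge regression regularized toward a prior mean `b` instead of toward zero, combined with the usual optimistic arm selection; the function names, the confidence scaling `beta`, and the fixed regularizer `lam` here are assumptions for illustration:

```python
import numpy as np

def biased_ridge_estimate(X, r, b, lam=1.0):
    """Ridge estimate regularized toward a prior mean b:
    argmin_theta ||r - X @ theta||^2 + lam * ||theta - b||^2.
    Closed form: theta_hat = b + (X^T X + lam I)^{-1} X^T (r - X b)."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return b + np.linalg.solve(A, X.T @ (r - X @ b))

def ucb_action(arms, theta_hat, A, beta):
    """OFUL-style optimistic choice: maximize
    x . theta_hat + beta * ||x||_{A^{-1}} over the arm set."""
    A_inv = np.linalg.inv(A)
    widths = np.sqrt(np.einsum("ij,jk,ik->i", arms, A_inv, arms))
    return int(np.argmax(arms @ theta_hat + beta * widths))
```

Regularizing toward `b` is what lets a meta-learner transfer across tasks: if `b` is close to the new task's parameter, the estimate is accurate after far fewer samples than an unbiased ridge estimate, while a large `lam` with a wrong `b` (e.g., the mean of a misclassified environment) pulls the estimate off target, which is the misclassification cost the regret bound captures.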

Authors (5)
  1. Ahmadreza Moradipari (18 papers)
  2. Mohammad Ghavamzadeh (97 papers)
  3. Taha Rajabzadeh (7 papers)
  4. Christos Thrampoulidis (79 papers)
  5. Mahnoosh Alizadeh (58 papers)
Citations (4)
