Transfer Learning with Partially Observable Offline Data via Causal Bounds (2308.03572v4)
Abstract: Transfer learning has emerged as an effective approach to accelerate learning by integrating knowledge from related source agents. However, challenges arise due to data heterogeneity, such as differences in feature sets or incomplete datasets, which often results in the nonidentifiability of causal effects. In this paper, we investigate transfer learning in partially observable contextual bandits, where agents operate with incomplete information and limited access to hidden confounders. To address the challenges posed by unobserved confounders, we formulate optimization problems to derive tight bounds on the nonidentifiable causal effects. We then propose an efficient method that discretizes the functional constraints on the unknown distributions into linear constraints, allowing us to sample compatible causal models through a sequential process of solving linear programs. This method accounts for estimation errors and exhibits strong convergence properties, ensuring robust and reliable causal bounds. Leveraging these causal bounds, we improve classical bandit algorithms, obtaining tighter upper and lower regret bounds with respect to the sizes of the action set and the function space. In tasks involving function approximation, which are crucial for handling complex context spaces, our method significantly improves the dependence on function space size compared to previous work. We formally prove that our causally enhanced algorithms outperform classical bandit algorithms, achieving notably faster convergence rates. The applicability of our approach is further illustrated through an example of offline pricing policy learning with censored demand. Simulations confirm the superiority of our approach over state-of-the-art methods, demonstrating its potential to enhance contextual bandit agents in real-world applications, especially when data is scarce, costly, or restricted due to privacy concerns.
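To make the two mechanisms in the abstract concrete, the snippet below is a minimal, self-contained sketch rather than the paper's algorithm: it derives Manski-style bounds on P(Y=1 | do(a)) from confounded offline data by solving linear programs over a discretized (response-type) parameterization of the unknown causal model, and then truncates the optimistic indices of a standard two-armed UCB learner by the resulting causal upper bounds. All names (causal_bounds, clipped_ucb, p_obs, etc.) are illustrative assumptions, not the paper's notation, and the binary, context-free setting is far simpler than the partially observable contextual bandits studied in the paper.

```python
import numpy as np
from scipy.optimize import linprog


def causal_bounds(p_obs):
    """Bound P(Y=1 | do(a)) for binary action/reward from confounded offline data.

    p_obs[y, a] is the observed joint P(Y = y, A = a). The unknown causal model is
    discretized via canonical response types t = (y under a=0, y under a=1), so both
    the compatibility constraints and the causal objective are linear in q[t, a].
    """
    types = [(y0, y1) for y0 in (0, 1) for y1 in (0, 1)]
    n = 2 * len(types)
    idx = lambda t, a: types.index(t) * 2 + a

    # Linear equality constraints: q must reproduce each observed cell P(Y=y, A=a).
    # (These four cells already force the total mass of q to equal one.)
    A_eq, b_eq = [], []
    for y in (0, 1):
        for a in (0, 1):
            row = np.zeros(n)
            for t in types:
                if t[a] == y:
                    row[idx(t, a)] = 1.0
            A_eq.append(row)
            b_eq.append(p_obs[y, a])
    A_eq = np.array(A_eq)

    bounds = {}
    for a_star in (0, 1):
        # Causal effect P(Y=1 | do(a*)) = total mass of all types with t[a*] = 1.
        c = np.zeros(n)
        for t in types:
            if t[a_star] == 1:
                for a in (0, 1):
                    c[idx(t, a)] = 1.0
        lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
        hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
        bounds[a_star] = (lo, hi)
    return bounds


def clipped_ucb(causal_ub, T=2000, true_means=(0.4, 0.7), seed=0):
    """Two-armed UCB whose optimistic indices are truncated by causal upper bounds."""
    rng = np.random.default_rng(seed)
    counts, sums = np.ones(2), np.zeros(2)
    for t in range(1, T + 1):
        ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
        ucb = np.minimum(ucb, causal_ub)  # offline causal bounds curb over-exploration
        a = int(np.argmax(ucb))
        sums[a] += rng.binomial(1, true_means[a])
        counts[a] += 1
    return counts


if __name__ == "__main__":
    # Toy confounded offline dataset: rows index Y in {0, 1}, columns index A in {0, 1}.
    p_obs = np.array([[0.30, 0.10],
                      [0.20, 0.40]])
    b = causal_bounds(p_obs)
    print("causal bounds:", b)  # approx {0: (0.2, 0.7), 1: (0.4, 0.9)}
    print("arm pull counts:", clipped_ucb(np.array([b[0][1], b[1][1]])))
```

In this toy example the linear programs recover the natural (Manski) bounds, and clipping the UCB index of a suboptimal arm at its causal upper bound reduces the exploration wasted on that arm; the paper's method extends this idea to contextual settings with function approximation and accounts for estimation error in the offline data.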