Additive Causal Bandits with Unknown Graph (2306.07858v1)
Abstract: We study algorithms for selecting actions in the causal bandit setting, where the learner can intervene on a set of random variables related by a causal graph, sequentially choosing interventions and observing a sample from the resulting interventional distribution. The learner's goal is to quickly identify, among all interventions on observable variables, the one that maximizes the expectation of an outcome variable. We depart from previous literature by assuming no knowledge of the causal graph beyond the absence of latent confounders between the outcome and its ancestors. We first show that the unknown-graph problem can be exponentially hard in the number of parents of the outcome. To remedy this, we adopt an additional additive assumption on the outcome, which allows us to cast the problem as an additive combinatorial linear bandit problem with full-bandit feedback. We propose a novel action-elimination algorithm for this setting, show how to apply it to the causal bandit problem, provide sample complexity bounds, and empirically validate our findings on a suite of randomly generated causal models, effectively showing that one does not need to explicitly learn the parents of the outcome to identify the best intervention.
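The reduction described in the abstract can be illustrated with a minimal sketch: under the additive assumption, each intervention is encoded as a binary feature vector over variable assignments, so its expected outcome is linear in a vector of unknown additive effects, and a phased action-elimination scheme with full-bandit feedback can search over interventions. Everything below (the dimensions, noise level, least-squares estimator, and the `2/sqrt(n)` elimination threshold) is a hypothetical toy instantiation, not the paper's actual algorithm or bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy additive model: each intervention is a binary vector over d
# variable assignments, and the expected outcome is linear in the
# unknown additive effects theta (the additive assumption).
d, K = 6, 16
theta = rng.normal(size=d)                         # unknown effects
arms = rng.integers(0, 2, size=(K, d)).astype(float)  # interventions

def pull(x, n):
    """Full-bandit feedback: n noisy scalar outcomes for intervention x."""
    return x @ theta + rng.normal(scale=0.5, size=n)

# Phased action elimination: estimate theta by least squares each
# phase, then drop arms whose estimated mean trails the leader.
active = list(range(K))
for phase in range(1, 9):
    n = 4 * 2 ** phase                             # samples per arm this phase
    X = np.repeat(arms[active], n, axis=0)
    y = np.concatenate([pull(arms[i], n) for i in active])
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    est = arms[active] @ theta_hat
    thr = 2.0 / np.sqrt(n)                         # heuristic threshold
    active = [i for i, e in zip(active, est) if e >= est.max() - thr]
    if len(active) == 1:
        break

best = active[0]  # surviving intervention, near-optimal w.h.p.
```

Note that the estimator shares samples across arms through the common parameter vector, which is why the additive structure avoids enumerating (or even identifying) the parents of the outcome.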