
Quantifying intrinsic causal contributions via structure preserving interventions (2007.00714v4)

Published 1 Jul 2020 in cs.AI, cs.IT, math.IT, and stat.ML

Abstract: We propose a notion of causal influence that describes the 'intrinsic' part of the contribution of a node to a target node in a DAG. By recursively writing each node as a function of the upstream noise terms, we separate the intrinsic information added by each node from the information obtained from its ancestors. To interpret the intrinsic information as a 'causal' contribution, we consider 'structure-preserving interventions' that randomize each node in a way that mimics the usual dependence on the parents and does not perturb the observed joint distribution. To obtain a measure that is invariant with respect to relabelling nodes, we use Shapley-based symmetrization and show that, in the linear case, it reduces to simple ANOVA after resolving the target node into noise variables. We describe our contribution analysis for variance and entropy, but contributions for other target metrics can be defined analogously. The code is available in the package gcm of the open-source library DoWhy.
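To make the linear-case statement concrete, here is a minimal sketch (not the paper's reference implementation) of the variance decomposition the abstract describes: in a linear SCM the target node can be resolved into the independent noise terms, and each node's intrinsic contribution to the target's variance is the variance it injects through its own noise term. The 3-node chain, edge weights, and noise variances below are assumed purely for illustration.

```python
# Minimal sketch of the linear-case decomposition: resolve the target into the
# independent noise terms and attribute Var(target) across nodes (ANOVA-style).
import numpy as np

# Hypothetical chain X0 -> X1 -> X2 with assumed edge weights.
A = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],    # X1 = 0.8*X0 + N1
              [0.0, 0.5, 0.0]])   # X2 = 0.5*X1 + N2
noise_var = np.array([1.0, 1.0, 1.0])  # Var(N0), Var(N1), Var(N2)

# Resolve every node into the upstream noise terms: X = (I - A)^{-1} N.
B = np.linalg.inv(np.eye(3) - A)

target = 2
# Intrinsic variance contribution of node j to the target: B[target, j]^2 * Var(N_j).
contributions = B[target] ** 2 * noise_var
print(contributions)        # [0.16, 0.25, 1.0] for the weights above
print(contributions.sum())  # equals Var(X2), since the noise terms are independent
```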

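For the general (possibly non-linear) case, the abstract points to the gcm package of DoWhy. Below is a hedged usage sketch: the module and function names follow DoWhy's documented gcm API (gcm.StructuralCausalModel, gcm.auto.assign_causal_mechanisms, gcm.fit, gcm.intrinsic_causal_influence), but exact signatures may differ across library versions, and the three-variable chain data is synthetic and assumed only for illustration.

```python
# Hedged sketch of computing intrinsic causal contributions with DoWhy's gcm
# package (names per the DoWhy documentation; details may vary by version).
import networkx as nx
import numpy as np
import pandas as pd
from dowhy import gcm

# Synthetic data from the same assumed chain X0 -> X1 -> X2 as above.
rng = np.random.default_rng(0)
X0 = rng.normal(size=2000)
X1 = 0.8 * X0 + rng.normal(size=2000)
X2 = 0.5 * X1 + rng.normal(size=2000)
data = pd.DataFrame({"X0": X0, "X1": X1, "X2": X2})

# Build the causal graph, assign mechanisms automatically, and fit them to the data.
causal_model = gcm.StructuralCausalModel(nx.DiGraph([("X0", "X1"), ("X1", "X2")]))
gcm.auto.assign_causal_mechanisms(causal_model, data)
gcm.fit(causal_model, data)

# Shapley-symmetrized intrinsic contributions of each node to the target node.
contributions = gcm.intrinsic_causal_influence(causal_model, target_node="X2")
print(contributions)  # dict mapping each node to its intrinsic contribution
```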