
Algorithmic syntactic causal identification (2403.09580v1)

Published 14 Mar 2024 in cs.AI, cs.LG, and stat.ME

Abstract: Causal identification in causal Bayes nets (CBNs) is an important tool in causal inference, allowing interventional distributions to be derived from observational distributions where this is possible in principle. However, most existing formulations of causal identification, using techniques such as d-separation and the do-calculus, are expressed within the mathematical language of classical probability theory on CBNs. There are many causal settings where probability theory, and hence current causal identification techniques, are inapplicable, such as relational databases, dataflow programs such as hardware description languages, distributed systems, and most modern machine learning algorithms. We show that this restriction can be lifted by replacing classical probability theory with the alternative axiomatic foundation of symmetric monoidal categories. In this alternative axiomatization, we show how an unambiguous and clean distinction can be drawn between the general syntax of causal models and any specific semantic implementation of that causal model. This allows a purely syntactic algorithmic description of general causal identification, obtained by translating recent formulations of the general ID algorithm through fixing. Our description is given entirely in terms of the non-parametric ADMG structure specifying a causal model and the algebraic signature of the corresponding monoidal category, to which a sequence of manipulations is applied so as to arrive at a modified monoidal category in which the desired, purely syntactic, interventional causal model is obtained. We use this idea to derive purely syntactic analogues of classical back-door and front-door causal adjustment, and illustrate an application to a more complex causal model.
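For orientation, the classical back-door adjustment that the paper generalizes syntactically can be sketched numerically. The following is an illustrative example, not code from the paper: it evaluates P(Y=1 | do(X=x)) = Σ_z P(Y=1 | X=x, Z=z) P(Z=z) on a tiny discrete CBN with confounder Z → X, Z → Y and treatment X → Y; all probability tables are made-up example numbers.

```python
# Classical back-door adjustment on a toy discrete causal Bayes net.
# Structure: Z -> X, Z -> Y, X -> Y, with {Z} a valid back-door set for X -> Y.
# All distributions below are hypothetical example values.

P_z = {0: 0.6, 1: 0.4}                     # P(Z = z)

P_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,  # P(Y = 1 | X = x, Z = z), keyed (x, z)
                (1, 0): 0.4, (1, 1): 0.9}

def backdoor_adjustment(x):
    """P(Y = 1 | do(X = x)) by adjusting for the back-door set {Z}:
    sum over z of P(Y = 1 | X = x, Z = z) * P(Z = z)."""
    return sum(P_y_given_xz[(x, z)] * P_z[z] for z in P_z)

print(backdoor_adjustment(1))  # 0.4 * 0.6 + 0.9 * 0.4 = 0.6
```

Note that the interventional quantity differs in general from the conditional P(Y=1 | X=1), since conditioning leaves the confounding path through Z open; the adjustment formula closes it by averaging over the marginal of Z rather than its posterior given X.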

