Diffusion Model in Causal Inference with Unmeasured Confounders (2308.03669v4)

Published 7 Aug 2023 in cs.LG, cs.AI, and stat.ML

Abstract: We study how to extend diffusion models to answer causal questions from observational data in the presence of unmeasured confounders. In Pearl's framework, which uses a Directed Acyclic Graph (DAG) to capture causal interventions, the Diffusion-based Causal Model (DCM) was proposed to incorporate diffusion models for more accurate answers to causal questions, under the assumption that all confounders are observed. In practice, however, unmeasured confounders exist, which prevents DCM from being applied. To alleviate this limitation, we propose an extended model, the Backdoor Criterion based DCM (BDCM), whose idea is rooted in the backdoor criterion: it identifies the variables in the DAG that should be included in the decoding process of the diffusion model, thereby extending DCM to the case with unmeasured confounders. Synthetic data experiments demonstrate that our proposed model captures the counterfactual distribution more precisely than DCM under unmeasured confounders.
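For context, the backdoor criterion that BDCM builds on identifies interventional distributions from observational quantities: if an observed set Z blocks every backdoor path from X to Y in the DAG and contains no descendant of X, then the effect of intervening on X is recoverable even when other confounders are unmeasured:

$$
P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z).
$$

As a rough illustration of this adjustment (a minimal sketch, not the paper's implementation), the snippet below uses a toy linear SCM with a confounder Z, and a hypothetical sampler `sample_y` standing in for a trained conditional diffusion decoder for p(y | x, z). Averaging the decoder's output over the empirical distribution of z recovers the causal effect, while naive conditioning on X does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM: Z -> X, Z -> Y, X -> Y. Z blocks the only backdoor
# path X <- Z -> Y, so {Z} is a valid adjustment set.
n = 100_000
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)
y = 2.0 * x - z + rng.normal(scale=0.5, size=n)

def sample_y(x_val, z_val):
    # Hypothetical stand-in for a conditional diffusion decoder
    # trained to sample from p(y | x, z); here we reuse the SCM.
    return 2.0 * x_val - z_val + rng.normal(scale=0.5)

def backdoor_estimate(x_do, z_samples):
    # Monte Carlo backdoor adjustment:
    # E[Y | do(x)] ~= (1/n) * sum_i E[Y | x, z_i]
    return np.mean([sample_y(x_do, zi) for zi in z_samples])

print(backdoor_estimate(1.0, z[:5000]))   # ~2.0, the true causal effect
print(y[np.abs(x - 1.0) < 0.05].mean())   # ~1.2, confounded estimate
```

BDCM's contribution, per the abstract, is to use this criterion to choose which DAG variables condition the diffusion model's decoding step, so that the adjustment remains valid when some confounders are never observed.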
