Large Language Models (LLMs) exhibit exceptional abilities for causal analysis between concepts in numerous societally impactful domains, including medicine, science, and law. Recent research on LLM performance in various causal discovery and inference tasks has given rise to a new ladder in the classical three-stage framework of causality. In this paper, we advance current research on LLM-driven causal discovery by proposing a novel framework that combines knowledge-based LLM causal analysis with data-driven causal structure learning. To make LLMs more than query tools and to leverage their power in discovering natural and new laws of causality, we integrate valuable LLM expertise on existing causal mechanisms into the statistical analysis of objective data, building a novel and practical baseline for causal structure learning. We introduce a universal set of prompts designed to extract causal graphs from given variables, and we assess the influence of LLM prior causality on recovering causal structures from data. We demonstrate that LLM expertise significantly improves the quality of causal structures recovered from data, while also identifying critical challenges and potential approaches to address them. As a pioneering study, this paper aims to highlight the new frontier that LLMs are opening for classical causal discovery and inference, and to encourage the widespread adoption of LLM capabilities in data-driven causal analysis.
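The integration the abstract describes can be sketched as a score-based structure search whose objective adds a soft bonus or penalty for edges the LLM asserts or forbids. Everything below is a hypothetical illustration, not the paper's actual algorithm: the variable names, the synthetic data, the `{edge: ±1}` prior format, and the weight `lam` are all assumptions.

```python
import math
import random
from itertools import product

random.seed(0)

# Hypothetical synthetic observational data: A causes B (90% agreement);
# C is independent of both. All variables are binary.
N = 500
VARS = ["A", "B", "C"]
data = []
for _ in range(N):
    a = random.random() < 0.5
    b = a if random.random() < 0.9 else not a
    c = random.random() < 0.5
    data.append({"A": int(a), "B": int(b), "C": int(c)})

def local_bic(child, parents):
    """BIC-style local score: max log-likelihood of `child` given `parents`,
    minus a complexity penalty per parent configuration."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        for k in (n0, n1):
            if k:
                ll += k * math.log(k / (n0 + n1))
    return ll - 0.5 * len(counts) * math.log(N)

def is_acyclic(edges):
    """Depth-first search for a back edge (cycle) in the directed graph."""
    children = {v: [] for v in VARS}
    for u, v in edges:
        children[u].append(v)
    done, active = set(), set()
    def dfs(u):
        if u in active:
            return False          # back edge => cycle
        if u in done:
            return True
        active.add(u)
        ok = all(dfs(w) for w in children[u])
        active.discard(u)
        done.add(u)
        return ok
    return all(dfs(v) for v in VARS)

def total_score(edges, prior, lam):
    parents = {v: sorted(p for p, c in edges if c == v) for v in VARS}
    data_fit = sum(local_bic(v, parents[v]) for v in VARS)
    # The LLM prior enters as a soft bonus/penalty on each oriented edge.
    return data_fit + lam * sum(prior.get(e, 0) for e in edges)

def hill_climb(prior, lam=2.0):
    """Greedy DAG search: repeatedly apply the first single-edge addition,
    deletion, or reversal that improves the penalized score."""
    edges = set()
    best = total_score(edges, prior, lam)
    improved = True
    while improved:
        improved = False
        for u, v in product(VARS, repeat=2):
            if u == v:
                continue
            cand = set(edges)
            if (u, v) in cand:
                cand.discard((u, v))       # deletion
            else:
                cand.discard((v, u))       # reversal if (v, u) exists
                cand.add((u, v))           # addition
            if not is_acyclic(cand):
                continue
            s = total_score(cand, prior, lam)
            if s > best + 1e-9:
                edges, best, improved = cand, s, True
    return edges

# An LLM-asserted edge A -> B breaks the direction tie that observational
# data alone cannot resolve between A -> B and B -> A.
llm_prior = {("A", "B"): 1}   # +1 = LLM asserts the edge; -1 would forbid it
print(hill_climb(llm_prior))
```

Treating the LLM's judgment as a soft score term rather than a hard constraint is one plausible design choice here: it lets strong statistical evidence in the data override an erroneous LLM-asserted edge, instead of forcing the search to keep it.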


