
Zero-shot Causal Graph Extrapolation from Text via LLMs (2312.14670v1)

Published 22 Dec 2023 in cs.AI

Abstract: We evaluate the ability of LLMs to infer causal relations from natural language. Compared to traditional natural language processing and deep learning techniques, LLMs show competitive performance in a benchmark of pairwise relations without needing (explicit) training samples. This motivates us to extend our approach to extrapolating causal graphs through iterated pairwise queries. We perform a preliminary analysis on a benchmark of biomedical abstracts with ground-truth causal graphs validated by experts. The results are promising and support the adoption of LLMs for such a crucial step in causal inference, especially in medical domains, where the amount of scientific text to analyse might be huge, and the causal statements are often implicit.
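The core procedure the abstract describes, extrapolating a causal graph by iterating zero-shot pairwise queries over every ordered variable pair, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: `ask_llm` is a hypothetical stand-in for whatever chat-completion client is available, and the prompt wording is an assumption rather than the paper's exact prompt.

```python
from itertools import permutations
from typing import Callable, Iterable

def extrapolate_causal_graph(
    text: str,
    variables: Iterable[str],
    ask_llm: Callable[[str], str],
) -> set[tuple[str, str]]:
    """Build a directed causal graph with one zero-shot LLM query
    per ordered pair of variables mentioned in `text`.

    `ask_llm` is a hypothetical stand-in for any chat-completion
    client: it takes a prompt string and returns the model's reply.
    """
    edges: set[tuple[str, str]] = set()
    for cause, effect in permutations(variables, 2):
        # Zero-shot: no training examples are included in the prompt.
        prompt = (
            f"Text: {text}\n\n"
            f"According to this text, does '{cause}' causally "
            f"influence '{effect}'? Answer 'yes' or 'no'."
        )
        reply = ask_llm(prompt).strip().lower()
        if reply.startswith("yes"):
            edges.add((cause, effect))
    return edges

# Example usage with a stubbed client (replace with a real API call):
if __name__ == "__main__":
    def fake_llm(prompt: str) -> str:
        # Toy stand-in that confirms only smoking -> cancer.
        return ("yes" if "does 'smoking' causally influence 'cancer'"
                in prompt else "no")

    abstract = "Smoking is a well-established cause of lung cancer."
    print(extrapolate_causal_graph(
        abstract, ["smoking", "cancer", "exercise"], fake_llm))
    # -> {('smoking', 'cancer')}
```

Querying each ordered pair separately keeps every call simple but costs O(n²) queries in the number of variables, which is why the paper frames its graph-level results on biomedical abstracts as a preliminary analysis.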

