From Pre-training Corpora to Large Language Models: What Factors Influence LLM Performance in Causal Discovery Tasks? (2407.19638v1)

Published 29 Jul 2024 in cs.CL

Abstract: Recent advances in artificial intelligence have seen LLMs demonstrate notable proficiency in causal discovery tasks. This study explores the factors influencing the performance of LLMs in causal discovery tasks. Utilizing open-source LLMs, we examine how the frequency of causal relations within their pre-training corpora affects their ability to accurately respond to causal discovery queries. Our findings reveal that a higher frequency of causal mentions correlates with better model performance, suggesting that extensive exposure to causal information during training enhances the models' causal discovery capabilities. Additionally, we investigate the impact of context on the validity of causal relations. Our results indicate that LLMs might exhibit divergent predictions for identical causal relations when presented in different contexts. This paper provides the first comprehensive analysis of how different factors contribute to LLM performance in causal discovery tasks.
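The frequency analysis described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration (not the authors' code): `corpus`, `pairs`, and the accuracy numbers are toy placeholders standing in for a real pre-training corpus and real model outputs, and document-level co-occurrence is used as a crude proxy for how often a causal relation is mentioned during pre-training.

```python
# Hypothetical sketch: correlate how often a (cause, effect) pair
# co-occurs in a corpus with an LLM's accuracy on "Does X cause Y?"
# queries. All data below are toy placeholders, not the paper's data.
import re
from statistics import correlation  # Pearson's r (Python 3.10+)

# Stand-in for a pre-training corpus.
corpus = [
    "Smoking causes lung cancer in many patients.",
    "Epidemiological studies show smoking causes lung cancer.",
    "Heavy rainfall leads to flooding in low-lying areas.",
]

# Causal pairs to query, with placeholder per-pair LLM accuracies.
pairs = [("smoking", "lung cancer"),
         ("rainfall", "flooding"),
         ("exercise", "weight loss")]
accuracies = [0.9, 0.6, 0.5]  # toy numbers standing in for model outputs

def mention_frequency(cause: str, effect: str, docs: list[str]) -> int:
    """Count documents where both terms appear: a crude proxy for how
    often the causal relation was seen during pre-training."""
    cause_pat = re.compile(re.escape(cause), re.IGNORECASE)
    effect_pat = re.compile(re.escape(effect), re.IGNORECASE)
    return sum(1 for d in docs if cause_pat.search(d) and effect_pat.search(d))

freqs = [mention_frequency(c, e, corpus) for c, e in pairs]
print(freqs)                           # [2, 1, 0]
print(correlation(freqs, accuracies))  # positive r: more mentions, better accuracy
```

In the paper itself, frequencies are measured over the actual pre-training corpora of open-source LLMs and accuracies over the models' answers to causal discovery queries; the toy correlation above merely mimics the reported positive association.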

Authors (6)
  1. Tao Feng (153 papers)
  2. Lizhen Qu (68 papers)
  3. Niket Tandon (40 papers)
  4. Zhuang Li (69 papers)
  5. Xiaoxi Kang (8 papers)
  6. Gholamreza Haffari (141 papers)
Citations (1)
