The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code (2305.19213v1)

Published 30 May 2023 in cs.CL

Abstract: Causal reasoning, the ability to identify cause-and-effect relationships, is crucial to human thinking. Although LLMs succeed in many NLP tasks, complex causal reasoning such as abductive reasoning and counterfactual reasoning remains challenging for them. Given that programming code often expresses causal relations explicitly through conditional statements like if, we explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that, compared to text-only LLMs, Code-LLMs with code prompts are significantly better at causal reasoning. We further intervene on the prompts from different aspects and find that the programming structure is crucial in code prompt design, while Code-LLMs are robust to format perturbations.
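
The abstract does not reproduce the paper's prompt templates, but a code-style prompt for a counterfactual reasoning instance might look roughly like the sketch below. The function name, variable names, and exact wording are illustrative assumptions; only the general idea of wrapping the task in a program structure with an if condition comes from the abstract.

```python
# Sketch: rendering one counterfactual-reasoning example as a code-style prompt
# for a Code-LLM. The template structure (a function with an `if` over the
# counterfactual condition) is an illustrative assumption, not the authors'
# exact prompt format.

def build_code_prompt(premise: str, counterfactual: str, endings: list[str]) -> str:
    """Format a premise, a counterfactual condition, and candidate endings
    as a Python-like prompt to be completed by a Code-LLM."""
    options = "\n".join(f"        # Option {i}: {e}" for i, e in enumerate(endings))
    return (
        f"premise = {premise!r}\n"
        f"counterfactual = {counterfactual!r}\n\n"
        "def plausible_ending():\n"
        "    if counterfactual:\n"
        f"{options}\n"
        "        # The most plausible ending is option"
    )

prompt = build_code_prompt(
    premise="Tom kept his wallet in his back pocket.",
    counterfactual="Tom had left his wallet at home.",
    endings=[
        "Tom paid for lunch with cash from his wallet.",
        "Tom had to borrow money from a friend.",
    ],
)
print(prompt)  # This string would be sent to a Code-LLM for completion.
```

The intuition, per the abstract, is that the programming structure itself (not the surface formatting) carries the causal scaffolding that helps Code-LLMs reason about the condition and its consequences.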

Authors (5)
  1. Xiao Liu (402 papers)
  2. Da Yin (35 papers)
  3. Chen Zhang (403 papers)
  4. Yansong Feng (81 papers)
  5. Dongyan Zhao (144 papers)
Citations (18)
