A Cross-Domain Evaluation of Approaches for Causal Knowledge Extraction (2308.03891v1)

Published 7 Aug 2023 in cs.CL

Abstract: Causal knowledge extraction is the task of extracting relevant causes and effects from text by detecting the causal relation. Although this task is important for language understanding and knowledge discovery, recent works in this domain have largely focused on binary classification of a text segment as causal or non-causal. In this regard, we perform a thorough analysis of three sequence tagging models for causal knowledge extraction and compare them with a span-based approach to causality extraction. Our experiments show that embeddings from pre-trained language models (e.g. BERT) provide a significant performance boost on this task compared to previous state-of-the-art models with complex architectures. We observe that span-based models perform better than simple sequence tagging models based on BERT across all four datasets from diverse domains with different types of cause-effect phrases.
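The sequence tagging formulation the abstract describes casts cause and effect phrases as token-level labels. A minimal sketch of that encoding step is below; the tag inventory (B-C/I-C for cause, B-E/I-E for effect, O elsewhere) is an illustrative assumption, not necessarily the paper's exact scheme.

```python
# Minimal sketch: converting a cause-effect annotated sentence into BIO
# tags, the target format a sequence tagging model would predict.
# Tag names (B-C, I-C, B-E, I-E, O) are assumed for illustration.

def to_bio_tags(tokens, cause_span, effect_span):
    """Label each token with a BIO tag, given (start, end) token-index
    spans (end exclusive) for the cause and effect phrases."""
    tags = ["O"] * len(tokens)
    for (start, end), label in ((cause_span, "C"), (effect_span, "E")):
        for i in range(start, end):
            tags[i] = ("B-" if i == start else "I-") + label
    return tags

tokens = ["Heavy", "rain", "caused", "severe", "flooding", "."]
tags = to_bio_tags(tokens, cause_span=(0, 2), effect_span=(3, 5))
# tags == ["B-C", "I-C", "O", "B-E", "I-E", "O"]
```

A span-based model, by contrast, would score candidate (start, end) spans directly rather than labeling one token at a time, which the paper reports works better across domains.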

Authors (6)
  1. Anik Saha (5 papers)
  2. Oktie Hassanzadeh (16 papers)
  3. Alex Gittens (34 papers)
  4. Jian Ni (22 papers)
  5. Kavitha Srinivas (25 papers)
  6. Bulent Yener (24 papers)
Citations (1)