Predicting Directionality in Causal Relations in Text
Published 25 Mar 2021 in cs.CL and cs.AI | (2103.13606v1)
Abstract: In this work, we test the performance of two bidirectional transformer-based language models, BERT and SpanBERT, on predicting directionality in causal pairs in text. Our preliminary results show that predicting direction is more challenging for inter-sentence and implicit causal relations, and that SpanBERT outperforms BERT on causal samples with longer span lengths. We also introduce CREST, a framework for unifying a collection of scattered datasets of causal relations.