
EA$^2$E: Improving Consistency with Event Awareness for Document-Level Argument Extraction (2205.14847v1)

Published 30 May 2022 in cs.CL

Abstract: Events are inter-related in documents. Motivated by the one-sense-per-discourse theory, we hypothesize that a participant tends to play consistent roles across multiple events in the same document. However, recent work on document-level event argument extraction models each individual event in isolation, causing inconsistency among extracted arguments across events, which in turn introduces discrepancies for downstream applications such as event knowledge base population, question answering, and hypothesis generation. In this work, we formulate event argument consistency as constraints derived from event-event relations in the document-level setting. To improve consistency, we introduce the Event-Aware Argument Extraction (EA$^2$E) model with augmented context for training and inference. Experiment results on the WIKIEVENTS and ACE2005 datasets demonstrate the effectiveness of EA$^2$E compared to baseline methods.

Authors (3)
  1. Qi Zeng (42 papers)
  2. Qiusi Zhan (9 papers)
  3. Heng Ji (266 papers)
Citations (21)
