Revisiting Event Argument Extraction: Can EAE Models Learn Better When Being Aware of Event Co-occurrences? (2306.00502v1)

Published 1 Jun 2023 in cs.CL and cs.AI

Abstract: Event co-occurrences have been proved effective for event extraction (EE) in previous studies, but have not been considered for event argument extraction (EAE) recently. In this paper, we try to fill this gap between EE research and EAE research, by highlighting the question that ``Can EAE models learn better when being aware of event co-occurrences?''. To answer this question, we reformulate EAE as a problem of table generation and extend a SOTA prompt-based EAE model into a non-autoregressive generation framework, called TabEAE, which is able to extract the arguments of multiple events in parallel. Under this framework, we experiment with 3 different training-inference schemes on 4 datasets (ACE05, RAMS, WikiEvents and MLEE) and discover that via training the model to extract all events in parallel, it can better distinguish the semantic boundary of each event and its ability to extract single event gets substantially improved. Experimental results show that our method achieves new state-of-the-art performance on the 4 datasets. Our code is available at https://github.com/Stardust-hyx/TabEAE.

Authors (3)
  1. Yuxin He (38 papers)
  2. Jingyue Hu (1 paper)
  3. Buzhou Tang (18 papers)
Citations (24)

Summary

Revisiting Event Argument Extraction through Event Co-occurrence Awareness

The paper "Revisiting Event Argument Extraction: Can EAE Models Learn Better When Being Aware of Event Co-occurrences?" examines Event Argument Extraction (EAE) through the lens of event co-occurrences. While earlier studies leveraged event co-occurrences for Event Extraction (EE), recent EAE models have overlooked this signal. The authors propose a new framework, TabEAE, which bridges this gap by extracting the arguments of multiple events in parallel.

Methodology and Experiments

TabEAE reformulates the EAE task as a table generation problem. It extends a state-of-the-art prompt-based EAE model into a non-autoregressive framework, allowing simultaneous extraction of arguments for multiple events. The framework consists of four key components: trigger-aware context encoding, slotted table construction, non-autoregressive table decoding, and span selection. The model inherits the efficient encoding and span-selection capabilities of the prompt-based approach and augments them with a novel table structure and decoding strategy.
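To make the table-generation view concrete, the sketch below builds a slotted table with one row per event trigger and one column per argument role; each valid role gets a slot placeholder, so all rows could in principle be filled in one non-autoregressive pass. This is an illustrative toy, not the authors' implementation: the role templates, the `[ARG]` token, and the function names are all assumptions.

```python
# Hypothetical sketch of slotted table construction for parallel
# multi-event argument extraction (illustrative, not the paper's code).

from typing import Dict, List

# Toy role templates per event type; real templates come from the
# dataset ontology (e.g. ACE05).
TEMPLATES: Dict[str, List[str]] = {
    "Attack": ["Attacker", "Target", "Instrument"],
    "Die": ["Victim", "Agent", "Place"],
}

SLOT = "[ARG]"  # placeholder whose decoder output is mapped to a text span


def build_table(events: List[Dict]) -> List[List[str]]:
    """One row per event trigger; each row opens a slot for every role
    valid for that event type, so all rows can be decoded in parallel."""
    header = ["Trigger"] + sorted({r for roles in TEMPLATES.values() for r in roles})
    table = [header]
    for ev in events:
        row = [ev["trigger"]] + [""] * (len(header) - 1)
        for role in TEMPLATES[ev["type"]]:
            row[header.index(role)] = SLOT  # open a slot only for valid roles
        table.append(row)
    return table


# Two co-occurring events in the same document share one table.
events = [
    {"type": "Attack", "trigger": "bombed"},
    {"type": "Die", "trigger": "killed"},
]
for row in build_table(events):
    print(row)
```

Putting co-occurring events in the same table is what lets the decoder contrast their argument spans, which the paper argues sharpens each event's semantic boundary.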

Experiments are conducted across four datasets: ACE05, RAMS, WikiEvents, and MLEE. The authors introduce three distinct training-inference schemes: Single-Single, Multi-Multi, and Multi-Single, each of which dictates whether the model is trained and tested on single or multiple event extractions at a time. The results show that the Multi-Single scheme consistently outperforms previous methods on three of the four datasets, highlighting the framework's ability to exploit event co-occurrence when extracting overlapping and correlated events. Meanwhile, the Multi-Multi scheme performs best on datasets such as MLEE, which feature extensive event nesting.
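The three schemes differ only in how triggers are grouped into instances at training versus inference time. A minimal sketch of that grouping, with illustrative function and variable names (not the authors' code):

```python
# Hypothetical sketch of the Single/Multi training-inference schemes:
# 'single' processes one trigger per instance, 'multi' processes all
# co-occurring triggers of a document in one instance.

from typing import List


def make_instances(doc_triggers: List[str], mode: str) -> List[List[str]]:
    """Group a document's triggers into model inputs for the given mode."""
    if mode == "single":
        return [[t] for t in doc_triggers]  # one instance per trigger
    if mode == "multi":
        return [doc_triggers]  # one instance covering all triggers
    raise ValueError(f"unknown mode: {mode}")


# Scheme name -> (training mode, inference mode)
SCHEMES = {
    "Single-Single": ("single", "single"),
    "Multi-Multi": ("multi", "multi"),
    "Multi-Single": ("multi", "single"),  # best on 3 of the 4 datasets
}

triggers = ["bombed", "killed"]
for name, (train_mode, infer_mode) in SCHEMES.items():
    print(name,
          "train:", make_instances(triggers, train_mode),
          "infer:", make_instances(triggers, infer_mode))
```

The Multi-Single result is the paper's central finding: training on all co-occurring events at once improves the model even when it extracts one event at a time at inference.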

Strong Numerical Results

The empirical evaluations show that TabEAE achieves new state-of-the-art results on all four benchmarks, outperforming contemporary methods by up to 2.7 points in Arg-C F1. This demonstrates an improved ability to delineate the semantic boundaries of co-occurring events, and suggests TabEAE can effectively distinguish and exploit event interrelationships.

Implications and Future Directions

The proposed integration of event co-occurrences into EAE tasks has theoretical and practical implications. Theoretically, this demonstrates how relational data, such as event co-occurrences, can refine natural language processing tasks like EAE. Practically, it implies a significant performance enhancement in extracting complex event relationships from textual data, which is valuable in fields such as information retrieval and large-scale data analysis.

Future research directions may explore automatic prompt construction, reducing reliance on manually crafted prompts and broadening the model's applicability across domains and languages. There is also room to enhance the model by incorporating coreference resolution techniques to better handle entity relationships and narrative context in documents.

Conclusion

In conclusion, this paper contributes substantially to the domain of event argument extraction by introducing a well-founded approach to embedding event co-occurrence awareness into the learning process. TabEAE's success lays a robust groundwork for future exploration in refining LLMs for event extraction tasks, emphasizing the nuanced interplay of events within textual narratives.
