Document-Level Event Argument Extraction by Conditional Generation (2104.05919v1)

Published 13 Apr 2021 in cs.CL

Abstract: Event extraction has long been treated as a sentence-level task in the IE community. We argue that this setting does not match human information-seeking behavior and leads to incomplete and uninformative extraction results. We propose a document-level neural event argument extraction model by formulating the task as conditional generation following event templates. We also compile a new document-level event extraction benchmark dataset WikiEvents which includes complete event and coreference annotation. On the task of argument extraction, we achieve an absolute gain of 7.6% F1 and 5.7% F1 over the next best model on the RAMS and WikiEvents datasets respectively. On the more challenging task of informative argument extraction, which requires implicit coreference reasoning, we achieve a 9.3% F1 gain over the best baseline. To demonstrate the portability of our model, we also create the first end-to-end zero-shot event extraction framework and achieve 97% of fully supervised model's trigger extraction performance and 82% of the argument extraction performance given only access to 10 out of the 33 types on ACE.

Document-Level Event Argument Extraction by Conditional Generation

The work presented in "Document-Level Event Argument Extraction by Conditional Generation" tackles event extraction at the document level rather than limiting the task to individual sentences, the prevailing convention in the information extraction (IE) community. The authors propose a neural model that frames event argument extraction as conditional generation guided by predefined event templates, a formulation that better matches human information-seeking behavior, which naturally extends across sentence boundaries.
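To make the framing concrete, here is a minimal sketch of the input-output pairing in Python; the template string, the `</s>` separator convention, and the example event are illustrative, not the paper's exact ontology or input format.

```python
# Sketch of the conditional-generation framing (illustrative template and
# separator; not the paper's exact ontology or formatting).
unfilled_template = "<arg> attacked <arg> using <arg> at <arg> place"
document = (
    "On Tuesday, rebels shelled the airport with mortars, "
    "wounding several soldiers stationed nearby."
)

# Encoder input: the unfilled template plus the document context.
encoder_input = f"{unfilled_template} </s> {document}"
# Decoder target: the same template with role slots filled by document spans.
decoder_target = "rebels attacked the airport using mortars at the airport place"
```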

Overview of the Approach

The paper introduces an end-to-end model that performs document-level event argument extraction, departing from traditional sentence-bound techniques. The model generates text by filling event templates with argument spans extracted from documents. This generative approach lets the model bypass the entity recognition and coreference resolution preprocessing steps that traditional pipelines typically require. A further contribution is the creation of a new benchmark dataset, WikiEvents, which contains complete event and coreference annotations and provides a robust platform for evaluating document-level extraction.
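A hedged sketch of inference under this formulation, assuming a BART-style encoder-decoder from the HuggingFace `transformers` library: the `facebook/bart-large` checkpoint here is a stand-in that would need fine-tuning on (template, document) to filled-template pairs before its output is meaningful, and the decoding settings are placeholders.

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

# Placeholder checkpoint: a BART-style model assumed to be fine-tuned on
# (unfilled template + document) -> (filled template) pairs.
model_name = "facebook/bart-large"
tokenizer = BartTokenizerFast.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

template = "<arg> attacked <arg> using <arg> at <arg> place"
document = ("On Tuesday, rebels shelled the airport with mortars, "
            "wounding several soldiers stationed nearby.")

# No entity recognition or coreference preprocessing: the raw document
# is the only context the generator conditions on.
inputs = tokenizer(f"{template} </s> {document}",
                   return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```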

Empirical Results

The proposed model achieves substantial performance improvements. Specifically, it posts an absolute gain of 7.6% F1 on the RAMS dataset and 5.7% F1 on WikiEvents over the best existing models, underscoring the efficacy of a document-level approach compared to sentence-bound methodologies.

Moreover, on the more complex task of informative argument extraction, which requires implicit coreference reasoning, the model achieves a 9.3% F1 improvement over the best baseline. This task reflects realistic scenarios where identifying the most informative mention of an argument across an entire document is essential.
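The intuition behind "most informative mention" can be illustrated with a small, hypothetical selection rule over a coreference cluster, preferring names over nominal phrases over pronouns; the mention records and `type` labels below are invented for illustration, not the paper's data format.

```python
# Hypothetical rule for picking the most informative coreferent mention:
# names beat nominal phrases, which beat pronouns; ties go to the
# earlier mention in the document.
RANK = {"NAME": 2, "NOMINAL": 1, "PRONOUN": 0}

def most_informative(cluster):
    return max(cluster, key=lambda m: (RANK[m["type"]], -m["start"]))

cluster = [
    {"text": "he", "type": "PRONOUN", "start": 120},
    {"text": "the suspect", "type": "NOMINAL", "start": 45},
    {"text": "John Doe", "type": "NAME", "start": 10},
]
print(most_informative(cluster)["text"])  # -> John Doe
```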

The paper also demonstrates the model's portability by introducing the first end-to-end zero-shot event extraction framework. Given access to only 10 of the 33 event types on ACE, the framework attains 97% of a fully supervised model's trigger extraction performance and 82% of its argument extraction performance.
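Why a template-conditioned generator transfers zero-shot can be sketched as follows: an unseen event type is introduced purely as a new natural-language template string, with no new parameters. Here `fill_template` is a hypothetical stand-in for the fine-tuned generator, and the type names are illustrative.

```python
# Hypothetical stand-in for the fine-tuned template-filling generator.
def fill_template(template: str, document: str) -> str:
    return f"[model would fill {template!r} from the given document]"

TEMPLATES = {
    # An event type seen during training...
    "Conflict.Attack": "<arg> attacked <arg> using <arg>",
    # ...and one the model never saw: only its template string is new.
    "Justice.Arrest": "<arg> arrested <arg> at <arg> place",
}

def extract_arguments(event_type: str, document: str) -> str:
    # Zero-shot: the same generator handles unseen types, because the
    # type enters the model purely as natural-language template text.
    return fill_template(TEMPLATES[event_type], document)

print(extract_arguments("Justice.Arrest",
                        "Police detained the suspect downtown on Friday."))
```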

Implications and Future Directions

The implications of this research span both theory and practice. On a theoretical level, it challenges the traditional scope of event extraction, pushing for a more holistic document-level understanding that could transform how large unstructured datasets are processed. Practically, the capability to perform informative argument extraction promises enhancements in applications such as automatic summarization, knowledge base population, and intelligent search engines.

Future developments could further explore integrating common sense or ontological constraints into the model, enriching the extraction quality by leveraging broader contextual knowledge. This could potentially refine results where entities or roles have domain-specific definitions that the model must learn. Additionally, the promising zero-shot results suggest that expanding support for new event types could drastically improve applications where annotated data is scarce.

Overall, this paper marks a notable step forward in event extraction by advocating document-level modeling and an innovative generative formulation that promises to reshape the boundaries of current extraction systems. The findings and methodologies proposed by Li, Ji, and Han lay a foundation for further exploration of large-scale, document-driven event understanding.

Authors (3)
  1. Sha Li (42 papers)
  2. Heng Ji (266 papers)
  3. Jiawei Han (263 papers)
Citations (268)