
Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction (2106.09232v1)

Published 17 Jun 2021 in cs.CL

Abstract: Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event. Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks. In this paper, we propose Text2Event, a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner. Specifically, we design a sequence-to-structure network for unified event extraction, a constrained decoding algorithm for event knowledge injection during inference, and a curriculum learning algorithm for efficient model learning. Experimental results show that, by uniformly modeling all tasks in a single model and universally predicting different labels, our method can achieve competitive performance using only record-level annotations in both supervised learning and transfer learning settings.

Citations (254)

Summary

  • The paper introduces a novel sequence-to-structure model that unifies trigger detection and argument labeling for event extraction.
  • It employs a constrained decoding algorithm and curriculum learning to align with predefined schemas and improve accuracy.
  • Experimental results on ACE05-EN and ERE-EN datasets demonstrate competitive F1-scores and effective transfer learning capabilities.

Text2Event: Controllable Sequence-to-Structure Generation for End-to-End Event Extraction

The paper presents Text2Event, a sequence-to-structure generation paradigm that tackles the complex task of event extraction by transforming text directly into event records. Traditional methods typically decompose the task into subtasks such as trigger detection and argument identification; Text2Event sidesteps this decomposition with a unified model that handles event extraction end-to-end.

Methodology

Text2Event combines a sequence-to-structure network, a constrained decoding algorithm that injects event knowledge at inference time, and a curriculum learning strategy for model training:

  1. Sequence-to-Structure Network: A sequence-to-sequence model generates a linearized tree that jointly encodes triggers, arguments, and their labels, solving all subtasks in a single decoding pass. Because the target is a whole record, the network does not require fine-grained token-level annotations, which improves annotation efficiency (see the linearization sketch after this list).
  2. Constrained Decoding Algorithm: This component restricts generation to sequences that are valid under the predefined event schema, injecting event knowledge into the model's inference phase and ruling out ill-formed outputs (a trie-based sketch follows the list).
  3. Curriculum Learning: To facilitate training, a staged approach is employed: the model is first trained on simplified substructure tasks and then progressively exposed to full-structure tasks (a schedule sketch follows the list).
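
To make the sequence-to-structure idea concrete, the following is a minimal sketch of the kind of linearization Text2Event performs: each event record is serialized as a labeled bracketed tree that a seq2seq model can emit as ordinary text. The record layout and helper names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of event-record linearization (illustrative, not the authors' code).
# An event record is serialized as a labeled tree, e.g.
#   ((Transport returned (Artifact The man) (Destination Los Angeles)))
# so a seq2seq model can generate structure as ordinary text.

def linearize_event(event: dict) -> str:
    """Serialize one event record as a bracketed tree string."""
    parts = [f"({event['type']} {event['trigger']}"]
    for role, argument in event["arguments"]:  # assumed (role, text) pairs
        parts.append(f"({role} {argument})")
    return " ".join(parts) + ")"

def linearize_events(events: list[dict]) -> str:
    """Wrap all events from one sentence in an outer bracket pair."""
    return "(" + " ".join(linearize_event(e) for e in events) + ")"

record = {
    "type": "Transport",
    "trigger": "returned",
    "arguments": [("Artifact", "The man"), ("Destination", "Los Angeles")],
}
print(linearize_events([record]))
# ((Transport returned (Artifact The man) (Destination Los Angeles)))
```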
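
The constrained decoding step can be pictured as a prefix trie built over schema-valid token sequences; at every step the decoder may only pick tokens that keep the partial output inside the trie. The sketch below wires this into Hugging Face's `prefix_allowed_tokens_fn` generation hook; the toy schema and trie construction are simplified assumptions, not the paper's implementation.

```python
# Sketch of trie-constrained decoding (assumes a T5-style model and Hugging
# Face's `prefix_allowed_tokens_fn` hook; not the paper's released code).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def build_trie(valid_sequences):
    """Nested-dict prefix trie over token-id sequences the schema allows."""
    trie = {}
    for seq in valid_sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

# In the paper, valid sequences come from the event schema (types, roles,
# structure tokens); here we constrain toward a toy two-label vocabulary.
valid = [tokenizer.encode(s) for s in ["( Transport", "( Attack"]]
trie = build_trie(valid)

def prefix_allowed_tokens_fn(batch_id, input_ids):
    """Return the token ids permitted after the current decoded prefix."""
    node = trie
    for tok in input_ids.tolist()[1:]:  # skip the decoder start token
        if tok in node:
            node = node[tok]
        else:
            return list(range(len(tokenizer)))  # off the trie: unconstrained
    return list(node.keys()) or list(range(len(tokenizer)))

inputs = tokenizer("The man returned to Los Angeles.", return_tensors="pt")
out = model.generate(**inputs,
                     prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
                     max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```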
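
Finally, the curriculum can be viewed as a data schedule: stage one trains on easy substructure targets such as `(Type Trigger)` pairs, stage two on full linearized trees. A minimal sketch of that schedule, reusing `linearize_events` from the first snippet (the decomposition rules here are assumptions):

```python
# Sketch of a two-stage curriculum data schedule (illustrative; dataset
# construction only, not the paper's training code).

def substructures(events):
    """Decompose full records into easy (Type Trigger) / (Role Arg) targets."""
    subs = []
    for e in events:
        subs.append(f"({e['type']} {e['trigger']})")
        for role, arg in e["arguments"]:
            subs.append(f"({role} {arg})")
    return subs

def curriculum_targets(events, stage):
    """Stage 1 trains on substructures, stage 2 on full linearized trees."""
    if stage == 1:
        return substructures(events)
    return [linearize_events(events)]  # from the linearization sketch above

events = [{"type": "Transport", "trigger": "returned",
           "arguments": [("Artifact", "The man"),
                         ("Destination", "Los Angeles")]}]
print(curriculum_targets(events, stage=1))
# ['(Transport returned)', '(Artifact The man)', '(Destination Los Angeles)']
```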

Experimental Results

The authors conducted extensive experiments on multiple datasets, including ACE05-EN and ERE-EN, where Text2Event demonstrated competitive results:

  • On ACE05-EN, Text2Event achieved results rivaling state-of-the-art models even though it forgoes the fine-grained token-level and entity annotations those models depend on.
  • Transfer learning was evaluated by pre-training on a subset of event types and fine-tuning on previously unseen types, yielding F1-score improvements.

Implications and Future Work

Text2Event introduces a new paradigm for event extraction by modeling all subtasks in a single, unified framework. This approach not only improves data efficiency but also encourages knowledge transfer across event types. The implications for AI research are significant: the sequence-to-structure approach could be adapted to other information extraction tasks, such as N-ary relation extraction. Future developments might focus on refining the sequence-to-structure model and applying it to more diverse datasets and tasks.

In summary, Text2Event represents a sophisticated integration of neural generation models and event extraction requirements, offering a promising direction for research focused on natural language understanding.