Evaluating the Impact of a Hierarchical Discourse Representation on Entity Coreference Resolution Performance
Published 20 Apr 2021 in cs.CL, cs.AI, and cs.LG | arXiv:2104.10215v1
Abstract: Recent work on entity coreference resolution (CR) follows current deep-learning trends, relying on embeddings and relatively simple task-related features. State-of-the-art models do not make use of hierarchical representations of discourse structure. In this work, we leverage automatically constructed discourse parse trees within a neural approach and demonstrate a significant improvement on two benchmark entity coreference resolution datasets. We also explore how the impact varies depending on the type of mention.