On Generalization in Coreference Resolution (2109.09667v1)

Published 20 Sep 2021 in cs.CL

Abstract: While coreference resolution is defined independently of dataset domain, most models for performing coreference resolution do not transfer well to unseen domains. We consolidate a set of 8 coreference resolution datasets targeting different domains to evaluate the off-the-shelf performance of models. We then mix three datasets for training; even though their domain, annotation guidelines, and metadata differ, we propose a method for jointly training a single model on this heterogeneous data mixture by using data augmentation to account for annotation differences and sampling to balance the data quantities. We find that in a zero-shot setting, models trained on a single dataset transfer poorly while joint training yields improved overall performance, leading to better generalization in coreference resolution models. This work contributes a new benchmark for robust coreference resolution and multiple new state-of-the-art results.
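The abstract mentions sampling to balance data quantities when jointly training on heterogeneous corpora. As a rough illustration (not the authors' implementation), the sketch below shows one simple balancing scheme: draw each training example by first picking a corpus uniformly, then a document within it, so small corpora are not drowned out by large ones. The dataset names and sizes are placeholders.

```python
# Illustrative sketch of corpus-balanced sampling for joint training.
# Not the paper's actual code; names and sizes are placeholders.
import random

datasets = {
    "ontonotes": list(range(2800)),   # placeholder document IDs
    "litbank":   list(range(80)),
    "preco":     list(range(36000)),
}

def balanced_sampler(datasets, num_steps, seed=0):
    """Yield (corpus_name, document) pairs, choosing each corpus with equal
    probability regardless of its size, then a document uniformly within it."""
    rng = random.Random(seed)
    names = list(datasets)
    for _ in range(num_steps):
        name = rng.choice(names)          # uniform over corpora, not documents
        doc = rng.choice(datasets[name])  # uniform within the chosen corpus
        yield name, doc

if __name__ == "__main__":
    counts = {}
    for name, _ in balanced_sampler(datasets, num_steps=10_000):
        counts[name] = counts.get(name, 0) + 1
    print(counts)  # roughly equal counts despite very different corpus sizes
```

Other weightings (e.g., temperature-scaled sampling proportional to corpus size) are equally plausible; the paper's exact scheme is described in the full text.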

Authors (5)
  1. Shubham Toshniwal (25 papers)
  2. Patrick Xia (26 papers)
  3. Sam Wiseman (30 papers)
  4. Karen Livescu (89 papers)
  5. Kevin Gimpel (72 papers)
Citations (36)
