
Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking (2005.14253v1)

Published 28 May 2020 in cs.CL and cs.LG

Abstract: In this work, we present an entity linking model that combines a Transformer architecture with large-scale pretraining from Wikipedia links. Our model achieves the state-of-the-art on two commonly used entity linking datasets: 96.7% on CoNLL and 94.9% on TAC-KBP. We present detailed analyses to understand what design choices are important for entity linking, including choices of negative entity candidates, Transformer architecture, and input perturbations. Lastly, we present promising results on more challenging settings such as end-to-end entity linking and entity linking without in-domain training data.
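The abstract describes scoring mention contexts against candidate entities with a Transformer and training against negative entity candidates. The sketch below is a rough illustration of that general setup in plain PyTorch, not the authors' architecture or code: the class name, dimensions, mean pooling, and negative-sampling scheme are all assumptions. A shared encoder scores a mention against a gold candidate plus sampled negatives, trained with a cross-entropy loss over the candidate set.

```python
# Hypothetical sketch of Transformer-based entity candidate scoring
# with sampled negatives; not the paper's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEntityLinker(nn.Module):
    def __init__(self, vocab_size=30522, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def encode(self, token_ids):
        # Mean-pool the Transformer outputs into a single vector per sequence.
        hidden = self.encoder(self.embed(token_ids))
        return hidden.mean(dim=1)

    def forward(self, mention_ids, candidate_ids):
        # mention_ids: (batch, seq_len); candidate_ids: (batch, num_cands, seq_len)
        m = self.encode(mention_ids)                        # (batch, dim)
        b, k, s = candidate_ids.shape
        c = self.encode(candidate_ids.view(b * k, s)).view(b, k, -1)
        return torch.einsum("bd,bkd->bk", m, c)             # one score per candidate

model = TinyEntityLinker()
mention = torch.randint(0, 30522, (2, 16))       # toy mention-in-context token ids
candidates = torch.randint(0, 30522, (2, 8, 8))  # gold candidate at index 0 + 7 negatives
scores = model(mention, candidates)
loss = F.cross_entropy(scores, torch.zeros(2, dtype=torch.long))
```

In this framing, the paper's analysis of "negative entity candidates" corresponds to how the non-gold rows of `candidates` are chosen (e.g. random vs. hard candidates), which is one of the design choices the abstract says the authors study.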

Authors (4)
  1. Nicholas FitzGerald (15 papers)
  2. Livio Baldini Soares (18 papers)
  3. Tom Kwiatkowski (21 papers)
  4. Thibault Févry (8 papers)
Citations (27)
