Bootleg: Chasing the Tail with Self-Supervised Named Entity Disambiguation (2010.10363v3)

Published 20 Oct 2020 in cs.CL, cs.AI, and cs.LG

Abstract: A challenge for named entity disambiguation (NED), the task of mapping textual mentions to entities in a knowledge base, is how to disambiguate entities that appear rarely in the training data, termed tail entities. Humans use subtle reasoning patterns based on knowledge of entity facts, relations, and types to disambiguate unfamiliar entities. Inspired by these patterns, we introduce Bootleg, a self-supervised NED system that is explicitly grounded in reasoning patterns for disambiguation. We define core reasoning patterns for disambiguation, create a learning procedure to encourage the self-supervised model to learn the patterns, and show how to use weak supervision to enhance the signals in the training data. Encoding the reasoning patterns in a simple Transformer architecture, Bootleg meets or exceeds state-of-the-art on three NED benchmarks. We further show that the learned representations from Bootleg successfully transfer to other non-disambiguation tasks that require entity-based knowledge: we set a new state-of-the-art in the popular TACRED relation extraction task by 1.0 F1 points and demonstrate up to 8% performance lift in highly optimized production search and assistant tasks at a major technology company.
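To make the NED task concrete, here is a minimal illustrative sketch of type-based disambiguation of the kind the abstract alludes to. This is not Bootleg's model; the toy knowledge base, type sets, and scoring function are all invented for illustration.

```python
# Illustrative sketch only: a toy type-overlap scorer for named entity
# disambiguation (NED), not Bootleg's actual architecture.

def disambiguate(context_types, candidates):
    """Pick the candidate entity whose knowledge-base types best overlap
    the types suggested by the mention's context."""
    def score(candidate):
        # Count shared types between the context and the candidate's KB entry.
        return len(context_types & candidate["types"])
    return max(candidates, key=score)

# "Lincoln" mentioned in a sentence about cars vs. a sentence about presidents
# (toy knowledge base with invented entries).
candidates = [
    {"entity": "Lincoln_(automobile)", "types": {"car_brand", "company"}},
    {"entity": "Abraham_Lincoln", "types": {"person", "president"}},
]
print(disambiguate({"car_brand", "vehicle"}, candidates)["entity"])
# → Lincoln_(automobile)
```

A real system like Bootleg learns such type, relation, and fact signals from data rather than counting hand-written type overlaps, which is what lets it generalize to tail entities.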

Authors (7)
  1. Laurel Orr (11 papers)
  2. Megan Leszczynski (10 papers)
  3. Simran Arora (64 papers)
  4. Sen Wu (19 papers)
  5. Neel Guha (23 papers)
  6. Xiao Ling (22 papers)
  7. Christopher Re (23 papers)
Citations (47)
