Evaluating the Knowledge Base Completion Potential of GPT (2310.14771v1)

Published 23 Oct 2023 in cs.CL and cs.AI

Abstract: Structured knowledge bases (KBs) are an asset for search engines and other applications, but are inevitably incomplete. Language models (LMs) have been proposed for unsupervised knowledge base completion (KBC), yet their ability to do this at scale and with high accuracy remains an open question. Prior experimental studies mostly fall short because they only evaluate on popular subjects, or sample already existing facts from KBs. In this work, we perform a careful evaluation of GPT's potential to complete the largest public KB: Wikidata. We find that, despite their size and capabilities, models like GPT-3, ChatGPT and GPT-4 do not achieve fully convincing results on this task. Nonetheless, they provide solid improvements over earlier approaches with smaller LMs. In particular, we show that, with proper thresholding, GPT-3 makes it possible to extend Wikidata by 27M facts at 90% precision.
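The precision figure above comes from thresholding model confidence: predicted facts are kept only if their confidence clears a cutoff calibrated on labeled data. A minimal sketch of that idea, with a hypothetical validation sample and function names not taken from the paper:

```python
# Illustrative sketch (not the paper's code): pick the lowest confidence
# threshold whose retained predictions still meet a target precision,
# calibrated on a human-judged validation sample.

def choose_threshold(scored_predictions, target_precision):
    """scored_predictions: list of (confidence, is_correct) pairs.
    Returns the lowest confidence cutoff whose retained top-ranked
    predictions reach target_precision, or None if unreachable."""
    # Rank predictions from most to least confident.
    ranked = sorted(scored_predictions, key=lambda p: p[0], reverse=True)
    best = None
    correct = 0
    for i, (conf, is_correct) in enumerate(ranked, start=1):
        correct += is_correct
        # Precision of everything at or above this confidence level.
        if correct / i >= target_precision:
            best = conf
    return best

# Hypothetical sample: (model confidence, human-judged correctness).
sample = [(0.95, True), (0.90, True), (0.80, True), (0.70, False),
          (0.60, True), (0.50, False), (0.40, False)]
cutoff = choose_threshold(sample, target_precision=0.9)  # → 0.8
```

At deployment, only predicted facts scoring at or above the calibrated cutoff would be added to the KB; the trade-off is recall, since lower-confidence but correct facts are discarded.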

Authors (4)
  1. Blerta Veseli (2 papers)
  2. Simon Razniewski (49 papers)
  3. Jan-Christoph Kalo (9 papers)
  4. Gerhard Weikum (75 papers)
Citations (7)