
WikiDataSets: Standardized sub-graphs from Wikidata (1906.04536v3)

Published 11 Jun 2019 in cs.LG, cs.AI, cs.SI, and stat.ML

Abstract: Developing new ideas and algorithms in the fields of graph processing and relational learning requires public datasets. While Wikidata is the largest open-source knowledge graph, involving more than fifty million entities, it is larger than needed in many cases and even too large to be processed easily. Still, it is a goldmine of relevant facts and relations. Using this knowledge graph is time-consuming and prone to task-specific tuning, which can affect reproducibility of results. Providing a unified framework to extract topic-specific subgraphs solves this problem and allows researchers to evaluate algorithms on common datasets. This paper presents various topic-specific subgraphs of Wikidata along with the generic Python code used to extract them. These datasets can help develop new methods of knowledge graph processing and relational learning.
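Topic-specific subgraphs like these are typically distributed as flat files of (head, relation, tail) triples. Below is a minimal sketch of loading one such subgraph into memory for downstream processing; the file name `humans/edges.tsv` and the column layout are assumptions for illustration, not the paper's published schema.

```python
import pandas as pd
import networkx as nx

# Load a topic-specific subgraph as a triple list.
# The file name and column order are assumed for illustration;
# consult the released datasets for the actual layout.
edges = pd.read_csv(
    "humans/edges.tsv",
    sep="\t",
    names=["head", "tail", "relation"],
)

# Build a directed multigraph whose edges carry relation identifiers,
# so that parallel facts between the same entity pair are preserved.
graph = nx.MultiDiGraph()
for head, tail, relation in edges.itertuples(index=False):
    graph.add_edge(head, tail, relation=relation)

print(f"{graph.number_of_nodes()} entities, "
      f"{graph.number_of_edges()} facts")
```

A triple list of this shape also plugs directly into most relational-learning pipelines, which is the use case the paper targets.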

Citations (7)
