
Large-scale Multi-granular Concept Extraction Based on Machine Reading Comprehension (2208.14139v1)

Published 30 Aug 2022 in cs.IR

Abstract: The concepts in knowledge graphs (KGs) enable machines to understand natural language, and thus play an indispensable role in many applications. However, existing KGs have poor coverage of concepts, especially fine-grained ones. In order to supply existing KGs with more fine-grained and new concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities. Specifically, MRC-CE is built on a BERT-based machine reading comprehension model, which can extract more fine-grained concepts with a pointer network. Furthermore, random forest and rule-based pruning are also adopted to enhance MRC-CE's precision and recall simultaneously. Our experiments on multilingual KGs, i.e., English Probase and Chinese CN-DBpedia, justify MRC-CE's superiority over state-of-the-art extraction models in KG completion. In particular, after running MRC-CE on each entity in CN-DBpedia, more than 7,053,900 new concepts (instanceOf relations) are supplied to the KG. The code and datasets have been released at https://github.com/fcihraeipnusnacwh/MRC-CE
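The abstract's key mechanism is a pointer network that scores start and end positions over an entity's descriptive text, so that overlapping spans of different lengths can all be extracted as candidate concepts (both coarse and fine-grained). A minimal sketch of that span-extraction step, assuming an MRC encoder (BERT in the paper) has already produced per-token start/end scores — the `top_k_spans` helper, the toy tokens, and the scores below are illustrative assumptions, not the authors' code:

```python
# Pointer-network-style span extraction sketch: rank candidate
# (start, end) spans by the sum of their start/end scores.
# Overlapping spans are kept, which is what lets the extractor
# return multi-granular concepts (e.g. "writer" and
# "science fiction writer") from one description.
from itertools import product

def top_k_spans(tokens, start_scores, end_scores, k=3, max_len=4):
    candidates = []
    for i, j in product(range(len(tokens)), repeat=2):
        if i <= j < i + max_len:  # valid span, bounded length
            candidates.append((start_scores[i] + end_scores[j], i, j))
    candidates.sort(reverse=True)
    return [" ".join(tokens[i:j + 1]) for _, i, j in candidates[:k]]

# Toy entity description suffix: "... american science fiction writer"
tokens = ["american", "science", "fiction", "writer"]
start = [0.2, 0.9, 0.1, 0.8]   # hypothetical model scores
end   = [0.1, 0.2, 0.3, 0.9]

print(top_k_spans(tokens, start, end, k=3))
# → ['science fiction writer', 'writer', 'science fiction']
```

In the full framework these candidates would then be filtered by the random forest classifier and rule-based pruning to balance precision and recall.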

Authors (8)
  1. Siyu Yuan (46 papers)
  2. Deqing Yang (55 papers)
  3. Jiaqing Liang (62 papers)
  4. Jilun Sun (1 paper)
  5. Jingyue Huang (7 papers)
  6. Kaiyan Cao (2 papers)
  7. Yanghua Xiao (151 papers)
  8. Rui Xie (59 papers)
Citations (2)
