Expanding the Vocabulary of BERT for Knowledge Base Construction (2310.08291v1)

Published 12 Oct 2023 in cs.CL and cs.AI

Abstract: Knowledge base construction entails acquiring structured information to create a knowledge base of factual and relational data, facilitating question answering, information retrieval, and semantic understanding. The "Knowledge Base Construction from Pretrained Language Models" challenge at the International Semantic Web Conference 2023 defines tasks focused on constructing a knowledge base using a language model. Our focus was Track 1 of the challenge, where model parameters are constrained to a maximum of 1 billion and the inclusion of entity descriptions within the prompt is prohibited. Although a masked language model offers sufficient flexibility to extend its vocabulary, it is not inherently designed for multi-token prediction. To address this, we present Vocabulary Expandable BERT for knowledge base construction, which expands the language model's vocabulary while preserving semantic embeddings for newly added words. We further adopt task-specific re-pre-training of the masked language model. Experimental results show the effectiveness of our approaches: our framework achieves an F1 score of 0.323 on the hidden test set and 0.362 on the validation set, both provided by the challenge. Notably, our framework uses a lightweight language model (BERT-base, 0.13 billion parameters) yet surpasses direct prompting of a far larger model (GPT-3, 175 billion parameters). Moreover, Token-Recode achieves performance comparable to Re-pretrain. This research advances language understanding models by enabling the direct embedding of multi-token entities, a substantial step forward for link prediction in knowledge graphs and metadata completion in data management.
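
The abstract's key idea is expanding BERT's vocabulary with multi-token entities while preserving their semantics. A minimal sketch of such an expansion step is below, assuming it resembles the common practice of seeding each new whole-entity embedding with the mean of its original subword embeddings; the entity strings, model identifier, and variable names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical vocabulary-expansion sketch for BERT using Hugging Face
# transformers. New whole-entity tokens are added to the tokenizer and their
# embeddings are initialized from the mean of their subword embeddings, so
# each new word starts near its compositional meaning.
import torch
from transformers import BertTokenizer, BertForMaskedLM

model_name = "bert-base-cased"  # illustrative; the paper uses BERT-base
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)

# Multi-token entities to add as single vocabulary items (illustrative).
new_entities = ["Semantic_Web", "Knowledge_Graph"]

# Record each entity's subword decomposition before extending the vocabulary.
subword_ids = {
    entity: tokenizer(entity.replace("_", " "), add_special_tokens=False)["input_ids"]
    for entity in new_entities
}

num_added = tokenizer.add_tokens(new_entities)  # returns number of new tokens
model.resize_token_embeddings(len(tokenizer))   # grows embedding matrix by num_added rows

# Initialize each new token's input embedding as the mean of its subword
# embeddings (the output head is tied to the input embeddings by default).
embeddings = model.get_input_embeddings().weight
with torch.no_grad():
    for entity in new_entities:
        new_id = tokenizer.convert_tokens_to_ids(entity)
        embeddings[new_id] = embeddings[subword_ids[entity]].mean(dim=0)
```

Task-specific re-pre-training would then continue masked-language-model training over the challenge data with the expanded vocabulary, letting the new entity embeddings adapt beyond their averaged initialization.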

Authors (3)
  1. Dong Yang
  2. Xu Wang
  3. Remzi Celebi