
Efficient Knowledge Infusion via KG-LLM Alignment (2406.03746v1)

Published 6 Jun 2024 in cs.CL and cs.AI

Abstract: To tackle the problem of domain-specific knowledge scarcity within LLMs, knowledge graph retrieval-augmented methods have been proven to be an effective and efficient technique for knowledge infusion. However, existing approaches face two primary challenges: knowledge mismatch between publicly available knowledge graphs and the specific domain of the task at hand, and poor information compliance of LLMs with knowledge graphs. In this paper, we leverage a small set of labeled samples and a large-scale corpus to efficiently construct domain-specific knowledge graphs with an LLM, addressing the issue of knowledge mismatch. Additionally, we propose a three-stage KG-LLM alignment strategy to enhance the LLM's capability to utilize information from knowledge graphs. We conduct experiments in a limited-sample setting on two biomedical question-answering datasets, and the results demonstrate that our approach outperforms existing baselines.
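To make the retrieval-augmented setup concrete, the sketch below shows the general pattern of injecting knowledge-graph triples into an LLM prompt. The toy triples, the substring-based entity matching, and the prompt template are illustrative assumptions, not the paper's actual construction or alignment method.

```python
# Minimal sketch of KG-retrieval-augmented prompting (assumptions, not
# the paper's method): retrieve triples whose entities appear in the
# question and prepend them as context for the LLM.

KG = [
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "may_cause", "lactic acidosis"),
    ("insulin", "treats", "type 1 diabetes"),
]

def retrieve_triples(question, kg):
    """Return triples whose head or tail entity is mentioned in the question.

    Substring matching is a simplifying assumption; a real system would
    use entity linking or embedding-based retrieval.
    """
    q = question.lower()
    return [t for t in kg if t[0] in q or t[2] in q]

def build_prompt(question, kg):
    """Format retrieved triples as context lines, then append the question."""
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in retrieve_triples(question, kg))
    return f"Known facts:\n{facts}\n\nQuestion: {question}"

prompt = build_prompt("What does metformin treat?", KG)
print(prompt)
```

The retrieved facts would then be passed, together with the question, to an LLM that has been aligned (per the paper, via a three-stage strategy) to comply with the provided graph information rather than its parametric priors.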

Authors (8)
  1. Zhouyu Jiang (4 papers)
  2. Ling Zhong (8 papers)
  3. Mengshu Sun (41 papers)
  4. Jun Xu (397 papers)
  5. Rui Sun (105 papers)
  6. Hui Cai (10 papers)
  7. Shuhan Luo (1 paper)
  8. Zhiqiang Zhang (129 papers)
Citations (6)