
Aligning Knowledge Graphs and Language Models for Factual Accuracy (2507.13411v1)

Published 17 Jul 2025 in cs.CL and cs.AI

Abstract: LLMs like GPT-4, Gemini, and Claude have transformed NLP tasks such as question answering, dialogue generation, and summarization; yet their susceptibility to hallucination remains one of the major challenges. Among the many approaches to this challenge, integrating Knowledge Graphs (KGs) into LLMs has emerged as a promising solution, as it provides structured, reliable, domain-specific, and up-to-date external information to the LLMs. In this paper, we introduce ALIGNed-LLM, a simple yet effective approach to improve LLMs' factuality via a lean strategy that infuses KGs into the latent space of LLMs, inspired by LLaVA, where visual and textual representations are fused. We use embeddings from a pre-trained Knowledge Graph Embedding (KGE) model, such as TransE, and a trainable projection layer to align entity and text embeddings. This alignment enables the LLM to distinguish between similar entities, improving factual grounding and reducing hallucination. We tested our approach on three popular question-answering benchmark datasets alongside LLMs of varying sizes, showing significant improvement. Furthermore, we applied our approach to a real-world financial use case from a large central bank in Europe, which demands high accuracy and precision, demonstrating a substantial improvement in the LLM's answers.
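The core mechanism the abstract describes (a frozen KGE lookup plus a trainable projection layer that maps entity vectors into the LLM's embedding space, analogous to LLaVA's vision projector) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the dimensions, the toy entity names, the random initialization, and the choice to prepend a single projected entity "token" to the text embeddings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

KGE_DIM = 50   # assumed TransE embedding size
LLM_DIM = 768  # assumed LLM hidden size

# Frozen KGE lookup table for a toy entity vocabulary (hypothetical IDs).
# Two similar surface forms map to distinct KG entities, which is what
# the alignment is meant to let the LLM tell apart.
entity_emb = {
    "Q90_Paris_France": rng.normal(size=KGE_DIM),
    "Q830149_Paris_Texas": rng.normal(size=KGE_DIM),
}

# Trainable linear projection from KGE space into the LLM's latent space
# (shown here with a random init; in training, W and b would be learned).
W = rng.normal(scale=0.02, size=(KGE_DIM, LLM_DIM))
b = np.zeros(LLM_DIM)

def project(kge_vec: np.ndarray) -> np.ndarray:
    """Map a frozen KGE entity vector into the LLM embedding space."""
    return kge_vec @ W + b

# Stand-in for tokenizing and embedding the question text.
question_tokens = rng.normal(size=(6, LLM_DIM))

# Prepend the projected entity embedding as an extra soft "token",
# so entity and text embeddings live in one aligned sequence.
entity_vec = project(entity_emb["Q90_Paris_France"])
llm_input = np.vstack([entity_vec[None, :], question_tokens])

print(llm_input.shape)  # (7, 768): one entity token + six text tokens
```

In this sketch only the projection (`W`, `b`) would receive gradients; the KGE table stays frozen, which keeps the added parameter count small relative to the LLM itself.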
