Aligning Knowledge Graphs and Language Models for Factual Accuracy (2507.13411v1)
Abstract: Large language models (LLMs) such as GPT-4, Gemini, and Claude have transformed NLP tasks such as question answering, dialogue generation, and summarization; yet their susceptibility to hallucination remains one of their major challenges. Among the many approaches to overcoming this challenge, the integration of Knowledge Graphs (KGs) into LLMs has emerged as a promising solution, as it provides structured, reliable, domain-specific, and up-to-date external information to the LLM. In this paper, we introduce ALIGNed-LLM, a simple yet effective approach to improving LLMs' factuality via a lean strategy that infuses KG information into the latent space of LLMs, inspired by LLaVA, where visual and textual information is fused. We use embeddings from a pre-trained Knowledge Graph Embedding (KGE) model, such as TransE, and a trainable projection layer to align entity and text embeddings. This alignment enables the LLM to distinguish between similar entities, improving factual grounding and reducing hallucination. We tested our approach on three popular question-answering benchmark datasets with LLMs of varying sizes, showing significant improvements. Furthermore, we applied our approach to a real-world financial use case from a large central bank in Europe, which demands high accuracy and precision, demonstrating a substantial improvement in the LLM's answers.
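For intuition, the PyTorch sketch below shows how a LLaVA-style trainable projection layer might map pre-trained TransE entity vectors into an LLM's text-embedding space and prepend them to the token embeddings. The dimensions, module names, and fusion scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): LLaVA-style injection of knowledge-graph
# entity embeddings into an LLM's input sequence via a trainable projection layer.
# Assumptions: TransE entity vectors of dimension KGE_DIM, an LLM with hidden size
# HIDDEN_DIM, and access to its token-embedding outputs; all names are illustrative.

import torch
import torch.nn as nn

KGE_DIM = 200      # dimensionality of the pre-trained TransE entity embeddings (assumed)
HIDDEN_DIM = 4096  # hidden size of the target LLM (assumed)

class EntityProjector(nn.Module):
    """Trainable projection aligning KG entity embeddings with the LLM's text-embedding space."""
    def __init__(self, kge_dim: int, hidden_dim: int):
        super().__init__()
        # A small MLP projector; the paper's exact projection architecture may differ.
        self.proj = nn.Sequential(
            nn.Linear(kge_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, entity_embs: torch.Tensor) -> torch.Tensor:
        # entity_embs: (batch, num_entities, kge_dim) -> (batch, num_entities, hidden_dim)
        return self.proj(entity_embs)

def build_inputs(token_embs: torch.Tensor, entity_embs: torch.Tensor,
                 projector: EntityProjector) -> torch.Tensor:
    """Prepend projected entity embeddings to the token embeddings, LLaVA-style."""
    projected = projector(entity_embs)                # (batch, num_entities, hidden_dim)
    return torch.cat([projected, token_embs], dim=1)  # (batch, num_entities + seq_len, hidden_dim)

# Example: two linked entities plus a 16-token question, batch size 1.
projector = EntityProjector(KGE_DIM, HIDDEN_DIM)
entity_embs = torch.randn(1, 2, KGE_DIM)     # frozen TransE vectors for the linked entities
token_embs = torch.randn(1, 16, HIDDEN_DIM)  # output of the LLM's token-embedding layer
inputs_embeds = build_inputs(token_embs, entity_embs, projector)
print(inputs_embeds.shape)  # torch.Size([1, 18, 4096])
```

In this setup only the projector would be trained (the KGE vectors stay frozen), and the concatenated sequence would be passed to the LLM via its `inputs_embeds` interface.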