
GraphMERT: Efficient and Scalable Distillation of Reliable Knowledge Graphs from Unstructured Data (2510.09580v1)

Published 10 Oct 2025 in cs.AI and cs.CL

Abstract: Researchers have pursued neurosymbolic AI applications for nearly three decades because symbolic components provide abstraction while neural components provide generalization. Thus, a marriage of the two components can lead to rapid advancements in AI. Yet, the field has not realized this promise since most neurosymbolic AI frameworks fail to scale. In addition, the implicit representations and approximate reasoning of neural approaches limit interpretability and trust. Knowledge graphs (KGs), a gold-standard representation of explicit semantic knowledge, can address the symbolic side. However, automatically deriving reliable KGs from text corpora has remained an open problem. We address these challenges by introducing GraphMERT, a tiny graphical encoder-only model that distills high-quality KGs from unstructured text corpora and its own internal representations. GraphMERT and its equivalent KG form a modular neurosymbolic stack: neural learning of abstractions; symbolic KGs for verifiable reasoning. GraphMERT + KG is the first efficient and scalable neurosymbolic model to achieve state-of-the-art benchmark accuracy along with superior symbolic representations relative to baselines. Concretely, we target reliable domain-specific KGs that are both (1) factual (with provenance) and (2) valid (ontology-consistent relations with domain-appropriate semantics). When a LLM, e.g., Qwen3-32B, generates domain-specific KGs, it falls short on reliability due to prompt sensitivity, shallow domain expertise, and hallucinated relations. On text obtained from PubMed papers on diabetes, our 80M-parameter GraphMERT yields a KG with a 69.8% FActScore; a 32B-parameter baseline LLM yields a KG that achieves only 40.2% FActScore. The GraphMERT KG also attains a higher ValidityScore of 68.8%, versus 43.0% for the LLM baseline.

Summary

  • The paper introduces GraphMERT, an encoder-based framework that efficiently extracts reliable and ontology-consistent knowledge graphs from unstructured text.
  • The methodology integrates Hierarchical Graph Attention Networks with a transformer architecture to fuse syntactic and semantic information for robust dual-space learning.
  • The paper demonstrates significant performance gains on medical datasets, achieving 69.8% FActScore and 68.8% ValidityScore, outperforming traditional LLM approaches.

GraphMERT: Efficient and Scalable Distillation of Reliable Knowledge Graphs from Unstructured Data

Introduction

The paper presents GraphMERT, a novel framework designed to efficiently and scalably distill reliable Knowledge Graphs (KGs) from unstructured data. The framework aims to overcome the challenges faced by conventional neurosymbolic AI systems, which struggle to scale and maintain interpretability, by effectively integrating neural and symbolic approaches to create a verifiable, reliable domain-specific KG.

Overview of GraphMERT Framework

GraphMERT is a small graphical encoder-only model (80M parameters) that distills a structured KG from high-quality text corpora and its own internal representations. The system targets domain-specific KGs that are factual, ontology-consistent, and suitable for high-stakes settings such as medicine. LLM-generated domain KGs tend to be unreliable due to prompt sensitivity, shallow domain expertise, and hallucinated relations; in contrast, GraphMERT attains substantially higher factuality and validity scores than its LLM counterparts.

GraphMERT Architecture

GraphMERT is built on an encoder-only transformer architecture, which is enhanced by Hierarchical Graph Attention Networks (H-GAT). This architecture allows GraphMERT to incorporate both semantic and syntactic information effectively. Key components of the architecture include:

  • Leafy Chain Graphs: These graphs unify syntactic contexts from text corpora with semantic examples from a seed KG, enabling dual-space learning.
  • H-GAT: This mechanism fuses relation embeddings into semantic graph nodes, enhancing the transformer with graph-based contextual information.
  • Graph Attention: Modifies attention weights with an exponential decay function, encoding spatial distances between graph nodes to ensure that close nodes exert more influence during learning.
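
To make the distance-decayed attention concrete, here is a minimal NumPy sketch assuming a single attention head and an exponential decay over shortest-path hop distances; the `decay` hyperparameter and the exact form of the bias are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def graph_biased_attention(q, k, v, hop_dist, decay=0.5):
    """Single-head scaled dot-product attention whose weights are damped
    by an exponential decay over graph distances, so closer nodes exert
    more influence. `decay` is a hypothetical hyperparameter.

    q, k, v:   (n_nodes, d) query/key/value matrices
    hop_dist:  (n_nodes, n_nodes) shortest-path hop distances
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # standard attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights *= np.exp(-decay * hop_dist)               # exponential distance decay
    weights /= weights.sum(axis=-1, keepdims=True)     # renormalize
    return weights @ v

# Toy example: 4 nodes with 8-dim embeddings on a chain graph 0-1-2-3.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
dist = np.abs(np.arange(4)[:, None] - np.arange(4)[None, :])
print(graph_biased_attention(x, x, x, dist).shape)     # (4, 8)
```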

Training and Dual Loss Objectives

GraphMERT's training involves two primary loss objectives:

  1. Masked Language Modeling (MLM): Trains GraphMERT on syntactic text corpora by predicting masked text portions.
  2. Masked Node Modeling (MNM): Focuses on semantic leaves, training relation embeddings through masked tail prediction within the chain graphs.

These objectives ensure that GraphMERT learns robust cross-modal representations, capturing knowledge that simultaneously reflects syntactic context and semantic meaning.
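
A minimal PyTorch sketch of how the two objectives might be combined is shown below. The toy encoder, the shared token/node vocabulary, the masking positions, and the equal loss weighting are all illustrative assumptions rather than GraphMERT's actual training recipe.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: a shared encoder produces hidden states for both
# text tokens (syntactic chain) and KG leaf nodes (semantic leaves).
vocab_size, node_vocab_size, hidden = 1000, 500, 64
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
    num_layers=2,
)
embed = torch.nn.Embedding(vocab_size + node_vocab_size, hidden)
mlm_head = torch.nn.Linear(hidden, vocab_size)        # predicts masked text tokens
mnm_head = torch.nn.Linear(hidden, node_vocab_size)   # predicts masked tail nodes

tokens = torch.randint(0, vocab_size, (2, 16))        # toy batch of chain graphs
hidden_states = encoder(embed(tokens))

# -100 marks unmasked positions, which cross_entropy ignores by default.
mlm_labels = torch.full((2, 16), -100, dtype=torch.long)
mlm_labels[:, 3] = tokens[:, 3]                       # a masked text position
mnm_labels = torch.full((2, 16), -100, dtype=torch.long)
mnm_labels[:, 10] = 42                                # a masked semantic-leaf (tail) id

loss_mlm = F.cross_entropy(mlm_head(hidden_states).reshape(-1, vocab_size),
                           mlm_labels.reshape(-1), ignore_index=-100)
loss_mnm = F.cross_entropy(mnm_head(hidden_states).reshape(-1, node_vocab_size),
                           mnm_labels.reshape(-1), ignore_index=-100)
loss = loss_mlm + loss_mnm                            # equal weighting assumed for illustration
loss.backward()
```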

Seed KG Creation and Injection

A critical component of the GraphMERT pipeline is the creation and injection of a seed KG, built from high-quality domain-specific sources. Key steps include:

  • Entity Linking: Matches entities mentioned in the text to a domain ontology, ensuring genuine relevance to the target domain.
  • Similarity Matching: Selects seed-KG triples by their semantic similarity to the text corpus, using efficient embedding-based ranking (a minimal sketch follows this list).
  • Injection Algorithm: Ensures that each head entity in the text corpus is associated with a semantically meaningful triple, maximizing predictive relevance and diversity.
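
As a rough illustration of the similarity-matching step, the sketch below ranks seed-KG triples against a corpus passage with off-the-shelf sentence embeddings; the `all-MiniLM-L6-v2` model and the simple verbalization of triples are assumptions for illustration, not the paper's choices.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical seed-KG triples and a target passage from the corpus.
seed_triples = [
    ("metformin", "treats", "type 2 diabetes"),
    ("insulin", "regulates", "blood glucose"),
    ("aspirin", "treats", "headache"),
]
passage = "Metformin remains the first-line therapy for type 2 diabetes."

# The embedding model is an illustrative choice, not the paper's.
model = SentenceTransformer("all-MiniLM-L6-v2")
triple_texts = [" ".join(t) for t in seed_triples]
scores = util.cos_sim(model.encode(passage, convert_to_tensor=True),
                      model.encode(triple_texts, convert_to_tensor=True))[0]

# Rank triples by semantic similarity to the passage; the top-ranked ones
# would be candidates for injection alongside this passage.
ranked = sorted(zip(seed_triples, scores.tolist()), key=lambda p: -p[1])
for triple, score in ranked:
    print(f"{score:.3f}  {triple}")
```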

KG Extraction and LLM Assistance

For KG extraction, GraphMERT predicts masked tail tokens from text sequences conditioned on their context. This prediction benefits from a helper LLM, which combines the token predictions into coherent phrases, ensuring contextual and semantic congruence in the assembled triples (Figure 1).

Figure 1: Overview of the GraphMERT framework. It illustrates the training and semantic augmentation processes involving both chain graphs and LLM collaboration.
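
The masked-tail extraction step can be illustrated with a generic fill-mask model standing in for GraphMERT; the `bert-base-uncased` checkpoint and the relation template below are hypothetical stand-ins, not the paper's actual model or prompts.

```python
from transformers import pipeline

# Stand-in for GraphMERT: a generic masked-language model used only to
# illustrate masked tail prediction (the model choice is an assumption).
fill = pipeline("fill-mask", model="bert-base-uncased")

# A relation-conditioned template: head entity plus relation, tail masked.
prompt = "metformin is used to treat [MASK]."
for cand in fill(prompt, top_k=3):
    # Each candidate tail could then be assembled into a triple
    # (metformin, treats, <tail>) and refined with the helper LLM.
    print(f"{cand['score']:.3f}  {cand['token_str']}")
```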

Evaluation and Results

GraphMERT's efficacy is evaluated against a baseline LLM KG on medical datasets, showing notable improvements in reliability and accuracy:

  • FActScore and ValidityScore: GraphMERT achieves a 69.8% FActScore and a 68.8% ValidityScore, outperforming LLM baseline scores of 40.2% and 43.0% respectively.
  • GraphRAG Evaluation: Demonstrates superior performance in tasks requiring multi-hop reasoning and sustained context understanding, successfully answering domain-specific questions.
  • Domain Flexibility: GraphMERT adapts well to sparse but high-quality training data, underscoring the importance of data quality over sheer volume.

Conclusion

GraphMERT offers a robust and scalable approach to distilling KGs from unstructured data. With its advanced integration of syntactic and semantic learning, the framework not only overcomes limitations of traditional neurosymbolic systems but also enhances interpretability and reliability. Future work will focus on extending GraphMERT's applicability to other domains and improving multi-token span prediction capabilities. These advancements promise to bridge the gap between neural and symbolic reasoning, laying a path toward domain-specific superintelligence.


Knowledge Gaps

Below is a consolidated list of concrete gaps and open questions that the paper leaves unresolved and that future work could address:

  • Dependence on seed KG quality and coverage: How sensitive is performance to the size, completeness, and biases of the seed KG (e.g., “100+ triples per relation”)? What strategies reliably bootstrap in domains with no mature ontology or sparse seed triples?
  • Ontology evolution and schema induction: Can the method discover novel relation types, attributes, and qualifiers not present in the seed KG, and how are such candidates validated, incorporated, or rejected?
  • Entity recognition and linking pipeline clarity: The paper does not specify how entity spans are detected, normalized, and linked across documents to ontologies (UMLS/SNOMED/etc.). What are the NER/coreference/linking steps, error modes, and their contribution to end-to-end errors?
  • Handling n-ary, qualified, and contextual relations: The approach appears limited to binary triples. How are temporal qualifiers, provenance qualifiers, negation, speculation/hedging (e.g., “may be associated with”), dosage, and context-specific constraints represented and extracted?
  • Conflict resolution and uncertainty: How does the system reconcile conflicting statements across sources, quantify uncertainty at the triple level, and propagate confidence through the KG (e.g., edge weights, calibrated scores, contradiction tracking)?
  • Provenance granularity and aggregation: Sentence-level provenance is claimed, but how are multi-sentence/multi-document supports aggregated, ranked, and surfaced for auditing? How are duplicated or near-duplicate facts merged with their provenance histories?
  • Global vs. local extraction tension: Triple extraction is described as sentence-level; how is “global integration” quantitatively ensured beyond local co-occurrence (e.g., multi-hop evidence aggregation, cross-document corroboration thresholds)?
  • Data-quality robustness: The method assumes high-quality text (∼100M tokens). How does performance degrade under realistic noise, domain shift, or mixed-quality corpora, and what data-cleaning/selection criteria are most impactful?
  • Domain generality claims: Results are reported in a single biomedical subdomain (diabetes PubMed). Do the gains hold in other domains (law, finance, materials science) with different ontologies, terminologies, and discourse structures?
  • Evaluation completeness and possible metric bias: FActScore and ValidityScore are central, but their exact computation, reliance on LLM-based judges or external ontologies, and susceptibility to bias are unclear. Are there human evaluations, inter-annotator agreement studies, or adversarial stress tests?
  • Coverage/recall vs. precision trade-offs: The paper emphasizes reliability but does not quantify KG completeness. What is the recall of true facts, how does precision–recall vary with thresholds, and how does the method avoid over-pruning valid but rare facts?
  • Baseline breadth and reproducibility: Beyond one 32B LLM baseline, broader baselines (task-specific IE pipelines, graph-transformer variants, smaller/fine-tuned LLMs) and ablations (MNM vs. MLM, H-GAT on/off, relation embeddings, seed size) are missing. Can code, data splits, and hyperparameters be released for reproducibility?
  • Scalability and efficiency details: Training/inference throughput, memory usage, and cost comparisons vs. LLM and IE baselines are not quantified. How does performance scale with corpus size and model depth/width?
  • Continual learning and updates: How are new documents integrated without catastrophic forgetting? How are outdated or retracted facts detected and removed (KG “right to be forgotten”), and how are versioning and governance handled?
  • Error analysis and long-tail behavior: Which error types dominate (entity boundary vs. relation selection vs. ontology misalignment)? How does the method handle rare entities/relations and sparse-tail distributions?
  • Adversarial robustness and data poisoning: How resilient is the pipeline to adversarial or poisoned inputs (e.g., spurious correlations, fabricated relations), especially given reliance on sentence-level extraction?
  • Cross-lingual and multilingual applicability: Can the framework process non-English corpora and align entities across languages/ontologies? What additional supervision is required for cross-lingual linking?
  • Theoretical grounding of MNM: The proposed masked node modeling objective lacks theoretical analysis. What guarantees (if any) exist regarding sample complexity, bias–variance trade-offs, or convergence properties vs. MLM alone?
  • Tension in using LLMs for evaluation: If GraphRAG or LLM-generated summaries are used for KG evaluation, how is circularity or LLM-induced bias avoided, given the paper’s critique of LLM reliability?
  • Detection of negation/speculation: Biomedical text frequently encodes uncertainty. How does the system identify and filter speculative/negative statements to prevent contaminating the KG with non-facts?
  • New-domain bootstrapping with minimal supervision: What is the smallest viable seed (triples per relation, ontology depth) and minimal text volume for acceptable performance? Can weak supervision or self-training reduce manual seed requirements?
  • Governance and human-in-the-loop protocols: While editability/auditability are touted, concrete workflows (review interfaces, provenance UX, approval processes, inter-annotator agreement, rollback/versioning policies) are unspecified.
  • Ethical, legal, and bias auditing: How are biases in “high-quality” sources detected and mitigated? What privacy/copyright safeguards and compliance measures (especially in clinical domains) are implemented?
  • Integration with downstream reasoning systems: How well does the extracted KG support multi-hop reasoning, constraint checking, and query answering vs. curated KGs? Are there task-level benchmarks (e.g., clinical decision support, biomedical discovery) demonstrating end-to-end gains?
  • Discovery of novel scientific hypotheses: Can the method surface plausible but unverified relations (with uncertainty tagging) for expert triage, and how are such suggestions prevented from polluting the “reliable” KG?
  • Negative sampling and training bias: How are negatives constructed for relation learning without injecting false negatives? What is the impact of negative-sampling schemes on ValidityScore/FActScore?
  • Deduplication and canonicalization: How are synonyms, acronyms, and lexical variants canonically mapped to unique entities? What is the effect of canonicalization errors on graph structure and evaluation metrics?
  • Compatibility with structured/non-textual sources: Can GraphMERT incorporate tables, figures, or structured databases alongside text to improve recall and reduce ambiguity?
  • Confidence calibration and thresholds: How are inclusion thresholds chosen for triples, and are confidence scores well-calibrated for downstream decision-making (e.g., via temperature scaling or conformal methods)?