
Lexical-Semantic Graph Retrieval (LeSeGR)

Updated 16 February 2026
  • Lexical-Semantic Graph Retrieval (LeSeGR) is a paradigm that integrates sparse lexical and dense semantic signals via explicit graph structures for multi-hop, context-aware information access.
  • It employs graph neural network propagation and submodular objectives to balance relevance and diversity across citation, semantic, and knowledge graphs.
  • LeSeGR has demonstrated improved accuracy and efficiency in applications such as scientific QA, vector search, and lexicographic retrieval.

Lexical-Semantic Graph Retrieval (LeSeGR) is a retrieval paradigm that fuses lexical (sparse) and semantic (dense) relevance signals using explicit graph structures. Unlike traditional retrieval, which operates purely in lexical or embedding space, LeSeGR propagates entangled relevance signals through graph representations—whether over citation networks, semantic graphs, or knowledge graphs—enabling context-aware, multi-hop, and diversity-sensitive information access. This approach has been instantiated in scientific QA, vector search, lexicographic KG access, and document retrieval, demonstrating improved effectiveness, efficiency, and robustness across heterogeneous data.

1. Theoretical Foundations and Objectives

LeSeGR addresses fundamental limitations of both classical sparse (lexical) and dense (semantic embedding) retrieval. Sparse retrievers based on term matching (e.g., BM25) excel at precision but cannot capture semantic relationships, while dense retrievers (e.g., BERT embeddings) yield semantic matches but may miss exact or rare term overlap. Conventional hybrid schemes combine sparse and dense scores after independent retrieval, failing to model relational structure and multi-hop context. LeSeGR aims to “entangle” sparse and dense signals during the retrieval phase using an explicit graph backbone. The overarching objectives are:

  • Exploiting both lexical overlap and embedding-based semantics for more robust ranking;
  • Propagating contextual signals through document, citation, or knowledge graphs to capture implicit and explicit relations;
  • Balancing retrieval coverage and diversity to avoid redundancy and capture broader semantic space;
  • Enabling multi-hop and context-enriched retrieval for downstream tasks (e.g., long-form QA, RAG, lexicographic query translation) (Hu et al., 25 Jan 2025, Raja et al., 25 Jul 2025, Sennrich et al., 26 May 2025, Kulkarni et al., 2024).

2. Formalizations and Graph-Based Scoring Functions

LeSeGR instantiates graph-based retrieval by assigning nodes (text chunks, entities, or embeddings) and edges (lexical, semantic, or knowledge-based links) in a graph $G = (V, E)$:

  • Sparse scores: $\delta_{qi} = f_{\text{sparse}}(q^{\text{sparse}}, c_i^{\text{sparse}})$ (e.g., BM25, multi-vector sparse encoders);
  • Dense scores: $\alpha_{ij} = \operatorname{MLP}_\varphi(c_i^{\text{dense}} \ominus c_j^{\text{dense}})$ (element-wise subtraction followed by an MLP projection);
  • Message passing: at each layer $k$, messages from neighbors are weighted by $\delta_{qj} \cdot \alpha_{ij}$ and aggregated to update node representations: $h_i^{(k+1)} = \operatorname{AGG}_k(\{\, m_j^{(k)} \mid j \in \{i\} \cup \mathcal{N}(i) \,\})$;
  • Final ranking: $s(q, c_i) = f_{\text{dense}}(q^{\text{dense}}, h_i^{(K)})$, typically via dot product.
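As an illustration, one round of the weighted propagation above can be sketched in a few lines. The mean aggregator, the dictionary-based edge weights, and the function names here are simplifying assumptions for exposition, not the published implementation:

```python
import numpy as np

# Sketch of one LeSeGR message-passing layer: each neighbor's message is
# weighted by the product of its sparse query score (delta) and the learned
# dense edge weight (alpha), then mean-aggregated with the node's own state.

def propagate_layer(h, neighbors, delta, alpha):
    """h: (n, d) node states; neighbors: dict node -> neighbor ids;
    delta: (n,) sparse query-chunk scores; alpha: dict (i, j) -> edge weight."""
    h_next = np.empty_like(h)
    for i in range(len(h)):
        msgs = [h[i]]  # self message, mirroring {i} ∪ N(i)
        for j in neighbors.get(i, []):
            msgs.append(delta[j] * alpha[(i, j)] * h[j])
        h_next[i] = np.mean(msgs, axis=0)  # AGG_k taken as a simple mean
    return h_next

def score(q_dense, h_final):
    # Final ranking: dot product between query embedding and propagated states.
    return h_final @ q_dense
```

Stacking $K$ such layers and scoring against the dense query vector reproduces the structure of the ranking function; with an empty neighbor set the layer degenerates to plain dense scoring, matching the late-fusion special case noted below.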

This generalization subsumes hybrid late fusion as a special case (no neighbors), and supports flexible GNN architectures (Graph Transformer, GCN, GAT). For vector search, LeSeGR introduces a submodular objective that maximizes both semantic coverage and diversity:

$$f(S) = \sum_{v \in V} \max_{s \in S} \operatorname{sim}(v, s) + \lambda \sum_{\substack{u, v \in S \\ u < v}} \bigl(1 - \operatorname{sim}(u, v)\bigr) \qquad (1)$$

where $S$ is the selected subset, $\operatorname{sim}(\cdot, \cdot)$ denotes cosine similarity, and $\lambda$ tunes the relevance–diversity tradeoff. Greedy algorithms admit $(1 - 1/e)$-approximation guarantees (Raja et al., 25 Jul 2025, Hu et al., 25 Jan 2025).
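A minimal sketch of greedy maximization of objective (1), assuming a precomputed cosine-similarity matrix over the candidate set (the function names and the re-evaluation-per-step strategy are illustrative, not the paper's implementation):

```python
import numpy as np

# Greedy selection under the coverage + diversity objective of Eq. (1).
# `sims` is a symmetric cosine-similarity matrix over candidates V;
# `lam` is the relevance-diversity tradeoff λ.

def objective(S, sims, lam):
    if not S:
        return 0.0
    coverage = sims[:, S].max(axis=1).sum()          # each v covered by its best s ∈ S
    diversity = sum(1.0 - sims[u, v]                 # pairwise dissimilarity within S
                    for i, u in enumerate(S) for v in S[i + 1:])
    return coverage + lam * diversity

def greedy_select(sims, k, lam=0.5):
    S, remaining = [], list(range(sims.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda c: objective(S + [c], sims, lam))
        S.append(best)
        remaining.remove(best)
    return S
```

On a toy set with two near-duplicate vectors and one orthogonal vector, the greedy pass picks one representative of the duplicate pair plus the orthogonal outlier, rather than the two near-duplicates a pure top-$k$ ranking would return.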

3. Graph Construction, Context Types, and Pipeline Variants

LeSeGR supports multiple graph constructions depending on the domain and retrieval context:

  • Citation Graphs: Nodes are fixed-length text chunks from papers; edges include intra-document (adjacency, cross-reference) and inter-document citation links. Inter-document edges select the top-$n$ most relevant chunk pairs based on fused sparse+dense affinity.
  • Semantic Graphs for Vector Search: Nodes are embedding vectors. Edges comprise $k$NN relations (proximity in embedding space) augmented with knowledge or symbolic links (e.g., from ConceptNet, Wikipedia hyperlinks).
  • Knowledge Graphs for Lexicography: Nodes represent lexemes with rich properties. Edges encode relations such as derivation, semantic shift, and multilingual equivalency (Sennrich et al., 26 May 2025).
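The kNN-plus-symbolic construction for vector search can be sketched as follows; the brute-force similarity computation and the edge-set representation are simplifying assumptions for small collections:

```python
import numpy as np

# Toy semantic-graph construction: k-nearest-neighbor edges from cosine
# similarity, overlaid with symbolic links (e.g., curated concept pairs).

def build_graph(embeddings, k, symbolic_edges=()):
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)             # exclude self-edges
    edges = set()
    for i in range(len(X)):
        for j in np.argsort(-sims[i])[:k]:      # top-k neighbors of node i
            edges.add((i, int(j)))
    edges.update(symbolic_edges)                # overlay knowledge-based links
    return edges
```

In practice the $k$NN step would run through an ANN index rather than a dense similarity matrix; the symbolic overlay is what lets propagation later reach semantically related but embedding-distant nodes.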

The retrieval pipeline generally consists of:

  1. Indexing: Compute and store sparse/dense representations and construct the graph (offline).
  2. Initial Candidate Selection: Retrieve initial candidate set via ANN (e.g., HNSW, FAISS) or template-based matching.
  3. Graph Propagation: Run message passing (e.g., PPR diffusion, GNN layers) over candidate-induced subgraph.
  4. Scoring and Reranking: Fuse scores from direct similarity and graph-propagated signals.
  5. Extraction: Return ranked subgraphs or subqueries for downstream applications.

Efficiency is sustained by restricting propagation to small subgraphs around query-induced seeds and implementing the full pipeline with GPU batching (Raja et al., 25 Jul 2025, Hu et al., 25 Jan 2025, Kulkarni et al., 2024).
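Stages 2–4 of the pipeline can be illustrated with a small personalized-PageRank diffusion seeded at the ANN candidates; the power-iteration form, the fusion weight `beta`, and all names here are assumptions for the sketch:

```python
import numpy as np

# Graph propagation (stage 3) as personalized PageRank seeded at the
# initial candidate set, followed by score fusion (stage 4).

def ppr(adj, seeds, alpha=0.15, iters=50):
    """adj: (n, n) adjacency matrix; seeds: indices of ANN candidates."""
    n = adj.shape[0]
    deg = adj.sum(axis=0, keepdims=True)
    P = adj / np.where(deg == 0, 1, deg)   # column-stochastic transition matrix
    p = np.zeros(n)
    p[list(seeds)] = 1.0 / len(seeds)      # restart distribution over seeds
    r = p.copy()
    for _ in range(iters):
        r = (1 - alpha) * (P @ r) + alpha * p
    return r

def rerank(direct_scores, diffusion, beta=0.5):
    # Stage 4: fuse direct query similarity with graph-propagated relevance.
    return beta * direct_scores + (1 - beta) * diffusion
```

Because propagation runs only over the candidate-induced subgraph, the matrices involved stay small, which is what keeps per-query latency in the millisecond range reported below.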

4. Applications: Research QA, Vector Search, and Lexicography

LeSeGR has demonstrated effectiveness in multiple domains:

  • Research Question Answering (Research QA): By integrating LeSeGR into contextual retrieval-augmented generation (CG-RAG), scientific QA pipelines show improved retrieval accuracy (e.g., PubMedQA Hit@1 = 0.961 vs. ColBERT's 0.913) and downstream answer quality (PapersWithCodeQA MRR = 0.884, Coherence = 0.956). Multi-hop graph aggregation is crucial for propagating relevance along citation chains (Hu et al., 25 Jan 2025).
  • Vector Search: Submodular semantic compression in LeSeGR outperforms pure top-$k$ ANN by promoting broader semantic coverage and explicit diversity. Graph-augmented retrieval via personalized PageRank over symbolic and $k$NN edges enables retrieval of semantically coherent but non-local neighbors. This is directly relevant for RAG, multi-hop QA, and meaning-centric memory search (Raja et al., 25 Jul 2025).
  • Lexicographic Retrieval and KG Querying: LeSeGR principles underlie large-scale NL-to-SPARQL mappings over Wikidata’s lexicographic module, parameterizing user queries along four axes (property type, multiplicity, monolinguality, and complexity), enabling robust natural-language query translation and retrieval. Empirically, pass@1 (GPT-3.5, 0.87–0.89), granularity ratio, and BLEU scores all benefit from model scale and fusion of structural and lexical cues (Sennrich et al., 26 May 2025).

5. Empirical Evaluation and Comparative Analysis

Empirical results across domains establish several findings:

| System / Variant | Retrieval Metric | Diversity Metric | Efficiency |
|---|---|---|---|
| Top-k ANN (HNSW) | 0.5174–0.9987 | 0.0013–0.8068 | ms-scale |
| Semantic Compression (LeSeGR greedy) | 0.4798–0.9987 | 0.0013–0.5718 | ms, batched |
| Graph-Augmented LeSeGR (PPR diffusion) | 0.9168–0.9688 | 0.0671–0.1590 | ms, small memory |
  • Graph propagation yields the highest relevance but the lowest diversity unless symbolic edges are dense.
  • Semantic compression interpolates between the two, exposing the trade-off via $\lambda$.
  • GPU memory usage and query latency for LeSeGR are minimal (~1,921 MB and 403 ms vs. ColBERT's 12,674 MB and 562 ms); the pipeline remains practical for large corpora (Raja et al., 25 Jul 2025, Hu et al., 25 Jan 2025).
  • Ablations show Graph Transformer outperforming simpler GNNs; BGE-M3 (sparse) and MiniLM (dense) signal selection consistently yield optimal retrieval (Hu et al., 25 Jan 2025).
  • In lexicographic QA, model generalization remains challenging for unseen NL→SPARQL patterns, but larger models and fused retrieval cues improve structural adaptation and syntactic correctness (Sennrich et al., 26 May 2025).

6. Limitations, Challenges, and Prospects

While LeSeGR defines a powerful and flexible paradigm, several limitations persist:

  • Generalization Gaps: Models trained on templated or narrow queries struggle with compositional novelty, complex logical forms, and multi-hop reasoning not seen during training. Integrating retrieval-augmented generation and compositional generalization benchmarks (e.g., Spider4SPARQL) is recommended to address these gaps (Sennrich et al., 26 May 2025).
  • Scalability and Graph Maintenance: As KG and document graphs evolve (new relations, qualifiers), index and graph augmentation require ongoing updates. Dynamic, query-specific edge weighting via lightweight GNNs is a potential mitigation (Raja et al., 25 Jul 2025).
  • Domain Adaptation: For specialized lexical domains (proteomics, historical linguistics), constructing appropriate symbolic and semantic edge sets is challenging and may demand domain-specific resources.
  • Efficiency–Effectiveness Trade-off: Balancing computational overhead with retrieval quality, especially in high-dimensional or low-resource settings, remains an open area; adaptive parameter selection ($\lambda$, $\beta$) and domain-aware graph pruning are active research directions.
  • Future Extensions: Learning-to-rank over graph-induced substructures, tighter integration with LLM-driven feedback, and empirical user studies with domain experts (e.g., lexicographers, scientists) are promising avenues for maturing LeSeGR as a core IR paradigm (Hu et al., 25 Jan 2025, Raja et al., 25 Jul 2025, Sennrich et al., 26 May 2025).

LeSeGR synthesizes several key retrieval principles:

  • Cluster Hypothesis: LexBoost and related systems leverage the notion that documents "clustered" (lexically/semantically) with relevant neighbors are likely relevant; LeSeGR provides a generalized, graph-based instantiation (Kulkarni et al., 2024).
  • Hybrid Retrieval: Rather than post-hoc fusion of sparse and dense signals, LeSeGR propagates entangled scores through graph layers during retrieval—empirically and formally yielding improved contextual relevance.
  • Graph Augmentation: Overlaying symbolic and $k$NN graphs allows explicit modeling of semantic structure, facilitating multi-hop and context-rich retrieval; this overcomes the redundancy and “local trap” issues of pure ANN settings.
  • Semantic Compression: LeSeGR operationalizes coverage and diversity trade-offs through exact submodular functions, moving beyond simple top-$k$ ranking.
  • Conversational KG Access: In lexicographic retrieval, multi-dimensional query taxonomies and template-based syntactic mapping are direct applications of LeSeGR's principles in knowledge-graph settings (Sennrich et al., 26 May 2025).

LeSeGR is thus positioned at the intersection of information retrieval, knowledge graph reasoning, semantic search, and hybrid neural–symbolic learning, providing state-of-the-art results and a generalized framework for future research.
