Structure-aware Text Embeddings

Updated 13 January 2026
  • Structure-aware text embeddings are vector representations that integrate explicit structural signals (graphs, tables, trees) into text models for richer contextual understanding.
  • They employ various techniques such as spectral graph encoding, dual graph aggregation, and contrastive fusion to capture relational and positional information.
  • These embeddings improve performance on tasks like text-to-SQL, link prediction, and document modeling by boosting relational reasoning and schema alignment.

Structure-aware text embeddings are vector representations of textual units (tokens, sentences, paragraphs, documents) that explicitly incorporate information about the data’s underlying structure—such as graphs, tables, trees, or networks—during the embedding process. Unlike standard text representations that rely solely on sequential or local context, structure-aware approaches utilize syntactic, semantic, or relational signals from external or induced structures, enabling improved performance on tasks requiring relational reasoning, entity tracking, or schema alignment.

1. Theoretical Foundations and Formalisms

Structure-aware text embeddings are grounded in the view that semantics and meaning often arise from inter-token or inter-node relations not captured by linear sequences. Typical structures include directed or undirected graphs (AMRs, citation networks), tables (field-value schema), bipartite networks (TINs), parse trees, and knowledge graphs.

Formally, given a structure $\mathcal{G} = (V, E)$ comprising nodes $V$ (e.g., words, entities, records) and edges $E$ (e.g., syntactic dependencies, hyperlinks, purchase history), a mapping

$f: (T, \mathcal{G}) \mapsto \mathbb{R}^d$

produces structure-aware embeddings of tokens or segments $T$ conditioned on $\mathcal{G}$. Structural signals may be encoded via dedicated modules (e.g., GNNs (Montella et al., 2023), spectral encodings (Kamel et al., 15 Jul 2025), attention over graph topology (Wang et al., 7 Apr 2025), or dual aggregation across graphs (Cai et al., 2021)) or via contrastive alignment between textual and structural objectives (Theodoropoulos et al., 2021).

A plausible implication is that this design enables improved relational reasoning and context sensitivity in downstream applications.
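The following is a minimal sketch of the mapping $f: (T, \mathcal{G}) \mapsto \mathbb{R}^d$ in code: token vectors are refined by one round of neighbor averaging over the graph, so each output embedding reflects both the token and its structural context. All names and the averaging rule are illustrative assumptions; real systems replace them with learned encoders (GNNs, attention over the topology, spectral encodings).

```python
# Toy instance of f(T, G) -> R^d: blend each token vector with the mean of its
# graph neighbors. Purely illustrative; not any specific paper's method.
import numpy as np

def structure_aware_embed(token_vecs: np.ndarray, edges: list[tuple[int, int]]) -> np.ndarray:
    n, _ = token_vecs.shape
    out = token_vecs.copy()
    neighbors = {i: [] for i in range(n)}
    for u, v in edges:                                 # treat edges as undirected context links
        neighbors[u].append(v)
        neighbors[v].append(u)
    for i in range(n):
        if neighbors[i]:
            ctx = token_vecs[neighbors[i]].mean(axis=0)
            out[i] = 0.5 * (token_vecs[i] + ctx)       # blend token and structural context
    return out

vecs = np.random.default_rng(0).normal(size=(4, 8))    # 4 tokens, d = 8
print(structure_aware_embed(vecs, [(0, 1), (1, 2), (2, 3)]).shape)  # (4, 8)
```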

2. Methodological Variants

Spectral Graph Encoding (SAFT for AMR-to-Text)

The SAFT framework (Kamel et al., 15 Jul 2025) instantiates structure-aware embeddings via magnetic Laplacian-based positional encoding for directed graphs, specifically AMRs. For a graph with adjacency $A \in \{0,1\}^{n \times n}$,

  • Compute the symmetrized adjacency $A_S = A \vee A^\top$
  • Form the phase matrix $\Theta^{(q)}_{u,v} = 2\pi q\,(A_{u,v} - A_{v,u})$
  • Construct the Hermitian magnetic Laplacian $L^{(q)} = D_S - A_S \circ \exp(i\,\Theta^{(q)})$
  • Extract the $k$ smallest eigenvectors $\Gamma_{:,1\ldots k}$
  • Split the spectral embeddings into real and imaginary parts: $PE_{\text{node}}(v_i) \in \mathbb{R}^{2k}$
  • Concatenate with intra-node sinusoidal PE, project to LLM space via MLP, inject into token embeddings:

$E' = E + \text{AmrPE}$

Direction-sensitive PEs encode relative node positions and edge flows, maintaining structural fidelity in LLM input representations.
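Below is a minimal sketch of the spectral step described above, assuming a dense adjacency matrix and a fixed charge parameter $q$; the function name, the choice of $q$, and the toy graph are assumptions, not details from the SAFT implementation.

```python
# Magnetic-Laplacian positional encodings for a directed graph (illustrative sketch).
import numpy as np

def magnetic_laplacian_pe(A: np.ndarray, k: int = 8, q: float = 0.25) -> np.ndarray:
    """A: dense {0,1} adjacency of a directed graph (n x n); returns (n x 2k) PEs."""
    A_sym = np.maximum(A, A.T)                      # A_S = A ∨ A^T
    theta = 2 * np.pi * q * (A - A.T)               # phase matrix Θ^{(q)}
    D_sym = np.diag(A_sym.sum(axis=1))              # symmetrized degree matrix
    L = D_sym - A_sym * np.exp(1j * theta)          # Hermitian magnetic Laplacian
    _, eigvecs = np.linalg.eigh(L)                  # eigh: ascending eigenvalues, Hermitian input
    gamma = eigvecs[:, :k]                          # k smallest eigenvectors
    return np.concatenate([gamma.real, gamma.imag], axis=1)  # PE_node(v_i) in R^{2k}

# Toy example: a 3-node directed chain 0 -> 1 -> 2
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(magnetic_laplacian_pe(A, k=2).shape)  # (3, 4)
```

In a full pipeline, these node PEs would be concatenated with intra-node sinusoidal encodings, projected by an MLP, and added to the token embeddings as in $E' = E + \text{AmrPE}$.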

Dual Graph Aggregation (SADGA for Text-to-SQL)

SADGA (Cai et al., 2021) encodes both question and schema as graphs, applies a GGNN encoder, and aggregates representations bidirectionally (global, local, dual context) via attention and gating. The mechanism fuses information between the query and schema graphs, updating node embeddings to reflect both self and neighbor context. These aggregated node representations are concatenated and processed via a relation-aware transformer prior to decoding.
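A hedged sketch of the bidirectional aggregation idea follows: nodes of the question graph attend over the schema graph and a sigmoid gate blends self and cross-graph context. The dimensions, the scaled dot-product attention, and the single gate are simplifications of SADGA's global/local/dual-context scheme, not its exact equations.

```python
# Cross-graph attention with gated fusion (illustrative simplification of SADGA-style aggregation).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_graph_aggregate(H_q, H_s, W_gate):
    """H_q: question-graph node embeddings (n_q x d); H_s: schema-graph nodes (n_s x d)."""
    attn = softmax(H_q @ H_s.T / np.sqrt(H_q.shape[1]))     # question -> schema attention
    ctx = attn @ H_s                                         # aggregated schema context per question node
    gate = 1 / (1 + np.exp(-(np.concatenate([H_q, ctx], axis=1) @ W_gate)))  # sigmoid gate
    return gate * H_q + (1 - gate) * ctx                     # gated fusion of self and cross-graph context

rng = np.random.default_rng(0)
H_q, H_s = rng.normal(size=(5, 16)), rng.normal(size=(9, 16))
W_gate = rng.normal(size=(32, 16)) * 0.1
print(cross_graph_aggregate(H_q, H_s, W_gate).shape)  # (5, 16)
```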

In-Process Structural Conditioning (Struc-EMB)

Struc-EMB (Liu et al., 9 Oct 2025) injects neighboring segment information (hyperlinks, citations) into LLMs during embedding construction, either by:

  • Sequential concatenation: $E_{\text{seq}}(u, \mathcal{N})$
  • Parallel key-value caching: $E_{\text{par}}(u, \mathcal{N})$
  • Context distillation and semantic balancing for adaptive structural-context blending

The method directly leverages relational structure in the internal encoding state, providing a blueprint for context-integrated embeddings.
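A minimal sketch of the sequential-concatenation variant, assuming a generic text-embedding callable: the target segment is joined with its structural neighbors and encoded in one pass, so neighbor information enters the representation during encoding rather than being fused post hoc. The separator token, truncation policy, and function names are assumptions, not details from Struc-EMB.

```python
# Sequential concatenation of a segment with its linked neighbors before embedding (sketch).
from typing import Callable, Sequence

def struc_emb_seq(text: str,
                  neighbors: Sequence[str],
                  embed_fn: Callable[[str], list[float]],
                  max_chars: int = 4000) -> list[float]:
    joined = " [SEP] ".join([text, *neighbors])   # target segment first, then linked/cited segments
    return embed_fn(joined[:max_chars])           # single forward pass over the fused input

# Usage with a dummy encoder (replace with a real embedding model):
def dummy_embed(s: str) -> list[float]:
    return [float(len(s))]                        # stand-in "embedding" = text length

print(struc_emb_seq("Anchor paragraph.", ["Cited abstract A.", "Linked section B."], dummy_embed))
```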

Adversarial and Contrastive Fusion

ACNE (Gracious et al., 2020) and contrastive CLGS/CLDR (Theodoropoulos et al., 2021) frame embedding learning as a minimax game or contrastive objective between text and structure. ACNE:

  • Maintains a modality fusion model where a generator produces structure embeddings and a discriminator scores text-based edge plausibility
  • Incorporates mutual and topological attention for edge-specific context

Contrastive approaches train encoders to align graph-aware vectors with corresponding text, enforcing separation from hard negatives.
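The sketch below shows a generic InfoNCE-style alignment between paired text and structure vectors, with in-batch negatives; it illustrates the contrastive objective described above but is not the exact loss used by ACNE or CLGS/CLDR.

```python
# Generic contrastive alignment of text and structure embeddings (InfoNCE with in-batch negatives).
import numpy as np

def info_nce(text_emb: np.ndarray, struct_emb: np.ndarray, tau: float = 0.07) -> float:
    """Both inputs are (batch x d); row i of each forms a positive pair."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    s = struct_emb / np.linalg.norm(struct_emb, axis=1, keepdims=True)
    logits = t @ s.T / tau                          # pairwise cosine similarities, temperature-scaled
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # cross-entropy with diagonal (matched-pair) targets

rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(8, 32)), rng.normal(size=(8, 32))))
```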

Field-Gating and Table Schema (Table-to-Text)

Field-gating encoders (Liu et al., 2017) and dual attention mechanisms assign each input token a field-structure vector (field name, position), controlling how much schema information enters the cell state via field gates. The decoder computes combined word-level and field-level attention to harmonize content and structural relevance.
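A hedged sketch of the field-gate idea: alongside the usual LSTM gates, a dedicated gate decides how much of the field (name, position) vector flows into the cell state. The weight shapes and the exact update rule are simplifications for illustration; the original encoder has its own parameterization.

```python
# One step of a field-gated LSTM-style cell (illustrative simplification).
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def field_gated_step(x_t, z_t, h_prev, c_prev, W, W_f):
    """x_t: word embedding; z_t: field embedding (name, position); W, W_f: weight matrices."""
    gates = W @ np.concatenate([x_t, h_prev])        # stacked i, f, o, candidate pre-activations
    i, f, o, g = np.split(gates, 4)
    fld = W_f @ np.concatenate([z_t, h_prev])        # field-gate pre-activations
    g_f, z_hat = np.split(fld, 2)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g) \
        + sigmoid(g_f) * np.tanh(z_hat)              # field gate controls schema info entering the cell state
    h = sigmoid(o) * np.tanh(c)
    return h, c

d = 8
rng = np.random.default_rng(0)
h, c = field_gated_step(rng.normal(size=d), rng.normal(size=d),
                        np.zeros(d), np.zeros(d),
                        rng.normal(size=(4 * d, 2 * d)) * 0.1,
                        rng.normal(size=(2 * d, 2 * d)) * 0.1)
print(h.shape, c.shape)  # (8,) (8,)
```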

Adapter-based Structural Injection (StructAdapt)

StructAdapt (Montella et al., 2023) grafts a GNN onto PLM encoder adapters, running message passing on subword graphs built from AMR reification, and optionally leveraging T5 relative position embeddings (RPEs) to preserve adjacency signals.
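Below is a minimal sketch of the adapter-with-GNN idea: a bottleneck adapter whose inner transform performs one round of message passing over a subword-level graph, injecting connectivity without touching the frozen PLM weights. The down/up projections and the mean aggregation are assumptions for illustration; StructAdapt's actual graph convolution and AMR-reification graph construction are richer.

```python
# Bottleneck adapter with one message-passing step over a subword graph (sketch).
import numpy as np

def gnn_adapter(H, adj, W_down, W_up):
    """H: (n x d) PLM hidden states; adj: (n x n) subword-graph adjacency."""
    Z = np.maximum(H @ W_down, 0.0)                   # down-project + ReLU
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    Z = (adj @ Z) / deg                               # one step of neighbor averaging over the graph
    return H + Z @ W_up                               # residual up-projection back to d

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 64))
adj = (rng.random((6, 6)) > 0.6).astype(float)
print(gnn_adapter(H, adj, rng.normal(size=(64, 16)) * 0.1, rng.normal(size=(16, 64)) * 0.1).shape)
```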

3. Empirical Evidence and Comparative Results

Quantitative experiments reveal consistent performance gains for structure-aware embeddings:

  • SAFT (Kamel et al., 15 Jul 2025): Yields up to +3.5 BLEU improvement over conventional AMR-to-text fine-tuning. Gains scale with AMR depth, up to +6 BLEU on document-level test sets.
  • SADGA (Cai et al., 2021): Achieves +2.0pt exact-match accuracy (GloVe) vs RATSQL on Spider; BERT-large variant closes the gap further, especially on extra-hard queries.
  • Struc-EMB (Liu et al., 9 Oct 2025): Sequential concat (Struc-Emb-Seq) outperforms vanilla and post-hoc methods by +7.8 nDCG@10 on MuSiQue retrieval; parallel caching is more efficient for long-context tasks.
  • ACNE (Gracious et al., 2020): Improves link prediction AUC by up to +12% for unseen nodes; node classification Macro-F1 increases by up to +2%.
  • Table-to-text models (Liu et al., 2017): Field-gating and dual attention yield +2.8 BLEU and +1.2 BLEU increases, respectively, over vanilla seq2seq.
  • StructAdapt (Montella et al., 2023): RPE alone increases BLEU by ~+26 over no-RPE baselines; GNN adapters without RPE show +18–23 gains; RPE+StructAdapt add another +10 BLEU.

Table: Representative Performance Improvements

Approach/Domain   | Structure-Aware ∆ vs Baseline      | Task
SAFT (AMR)        | +0.3–2.3 BLEU                      | AMR 3.0 sentence-level generation
SAFT (DocAMR)     | +4.5–8.0 BLEU                      | DocAMR zero-shot generation
SADGA             | +2.0 pts exact-match               | Spider text-to-SQL
Struc-EMB (Seq)   | +7.8 nDCG@10                       | Multi-hop retrieval (MuSiQue)
ACNE              | +7% AUC (seen), +12% AUC (unseen)  | Link prediction
Table2Text        | +2.8 BLEU                          | WikiBio table-to-text

The table lists selected deltas as reported in the referenced works.

4. Structural Representation Techniques

Approaches to encoding structure within embeddings vary by domain and objective, as surveyed above: spectral positional encodings over directed graphs (SAFT), bidirectional cross-graph aggregation with gating (SADGA), in-process conditioning via sequential concatenation or parallel key-value caching (Struc-EMB), adversarial and contrastive fusion of text and structure (ACNE, CLGS/CLDR), field-gated encoding of table schemas, and GNN adapters combined with relative position embeddings (StructAdapt).

This diversity of techniques reflects the time- and computation-sensitive requirements of the respective tasks.

5. Diagnostics, Limitations, and Analysis

Structure-aware embeddings yield best results when the underlying structure is nontrivial, and complexity stratification indicates that gains grow with graph depth or relational density (Kamel et al., 15 Jul 2025, Wang et al., 7 Apr 2025). Ablation studies consistently find that:

  • Omitting structural components (e.g., user/item message passing, distance/centrality embeddings) reduces classification metrics by 1–3 Macro-F1 points (Wang et al., 7 Apr 2025).
  • Relative position encodings in Transformers encode implicit graph-like biases; their removal degrades accuracy substantially, even for models with explicit GNN adapters (Montella et al., 2023).
  • Semantic balancing adjusts for over-emphasis on context, tuning representations via interpolation between pure and structure-enriched embeddings (Liu et al., 9 Oct 2025).

A plausible implication is that both explicit and implicit mechanisms contribute to structural sensitivity, and their effects may interact nonlinearly.
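The semantic-balancing adjustment mentioned above reduces, in its simplest form, to an interpolation between the text-only and structure-enriched embeddings; the sketch below assumes a scalar blending weight alpha, which is a tunable illustration rather than a value from the paper.

```python
# Interpolation between a pure text embedding and a structure-enriched one (sketch).
import numpy as np

def semantic_balance(e_text: np.ndarray, e_struct: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """alpha close to 1 trusts the text-only embedding; close to 0 trusts the structural context."""
    return alpha * e_text + (1.0 - alpha) * e_struct

print(semantic_balance(np.ones(4), np.zeros(4), alpha=0.7))  # [0.7 0.7 0.7 0.7]
```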

6. Broader Applications and Generalization

Structure-aware text embeddings have demonstrated efficacy in AMR-to-text and table-to-text generation, text-to-SQL parsing, multi-hop retrieval, link prediction and node classification over text-attributed networks, and document-level modeling.

These techniques can be applied wherever textual phenomena derive meaning from structured data, especially as language modeling tasks extend into domains with non-sequential context (e.g., scientific literature, e-commerce, social networks).

7. Open Directions and Considerations

Empirical analyses suggest that structural information is critical in nontrivial graph contexts, especially when supporting zero-shot generalization, entity tracking, or coreference resolution (Kamel et al., 15 Jul 2025, Cai et al., 2021). Limitations include:

  • Computational overhead for large graphs or many context segments (Struc-Emb-Seq quadratic scaling; necessity for sampling in TINs) (Liu et al., 9 Oct 2025, Wang et al., 7 Apr 2025)
  • Difficulty disentangling the contribution of induced positional encodings versus explicit graph features (Montella et al., 2023)
  • Sensitivity to noise in contextual segments, mitigated by distillation/balancing approaches (Liu et al., 9 Oct 2025)
  • Restricted bidirectional encoding in some in-process methods, requiring architectural adaptation (Liu et al., 9 Oct 2025)

Further research into disentangling positional bias from explicit connectivity, learning flexible structural adapters, and extending these paradigms to multimodal networks and higher-order relational structures is warranted.
