
Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs (2006.09242v3)

Published 16 Jun 2020 in cs.CL

Abstract: We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph - not only direct neighbors - facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
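The abstract describes the mechanism concretely enough to illustrate: self-attention over all node pairs, with the shortest-path length between two nodes acting as a relative position that each attention head weights differently. Below is a minimal PyTorch sketch of that idea. The module name `GraphSelfAttention`, the bucketing scheme for long and unreachable distances, and all hyperparameters are assumptions made for illustration, not the paper's actual formulation or API.

```python
# Minimal sketch: self-attention with a learned bias per (head, shortest-path
# distance), loosely following the mechanism described in the abstract.
# All names and design details here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, max_dist: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.max_dist = max_dist
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learned scalar per (distance bucket, head): distances longer
        # than max_dist are clipped, and unreachable pairs get a final bucket.
        self.dist_bias = nn.Embedding(max_dist + 2, n_heads)

    def forward(self, x: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_nodes, d_model) node representations.
        # dist: (batch, n_nodes, n_nodes) long tensor of shortest-path
        # lengths, with -1 marking unreachable node pairs.
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.n_heads, self.d_head).transpose(1, 2)

        # Map raw distances to bias buckets: clip long paths, send
        # unreachable pairs (-1) to the last bucket.
        buckets = dist.clamp(min=0, max=self.max_dist)
        buckets = torch.where(
            dist < 0, torch.full_like(dist, self.max_dist + 1), buckets
        )
        # (batch, n, n, heads) -> (batch, heads, n, n)
        bias = self.dist_bias(buckets).permute(0, 3, 1, 2)

        # Every node attends to every other node; the per-head distance
        # bias modulates how strongly, which lets different heads favor
        # near or far neighborhoods.
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5 + bias
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)


# Usage: 10 nodes, random features and pairwise shortest-path distances.
layer = GraphSelfAttention(d_model=256, n_heads=8, max_dist=5)
x = torch.randn(2, 10, 256)
dist = torch.randint(-1, 6, (2, 10, 10))
y = layer(x, dist)  # (2, 10, 256)
```

Because each head learns its own bias over distances, some heads can concentrate attention on immediate neighbors while others reach distant nodes, which is one way to read the abstract's claim that the model learns "differently connected views of the input graph."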

Authors (5)
  1. Martin Schmitt (18 papers)
  2. Leonardo F. R. Ribeiro (25 papers)
  3. Philipp Dufter (21 papers)
  4. Iryna Gurevych (264 papers)
  5. Hinrich Schütze (250 papers)
Citations (8)