
GNN-LM: Language Modeling based on Global Contexts via GNN (2110.08743v5)

Published 17 Oct 2021 in cs.CL

Abstract: Inspired by the notion that "to copy is easier than to memorize", in this work we introduce GNN-LM, which extends the vanilla neural language model (LM) by allowing it to reference similar contexts in the entire training corpus. We build a directed heterogeneous graph between an input context and its semantically related neighbors selected from the training corpus, where nodes are tokens in the input context and retrieved neighbor contexts, and edges represent connections between nodes. Graph neural networks (GNNs) are constructed upon the graph to aggregate information from similar contexts to decode the token. This learning paradigm provides direct access to the reference contexts and helps improve a model's generalization ability. We conduct comprehensive experiments to validate the effectiveness of GNN-LM: it achieves a new state-of-the-art perplexity of 14.8 on WikiText-103 (a 3.9 point improvement over its vanilla LM counterpart), and shows substantial improvements on the One Billion Word and Enwik8 datasets against strong baselines. In-depth ablation studies are performed to understand the mechanics of GNN-LM. The code can be found at https://github.com/ShannonAI/GNN-LM.
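
As a rough illustration of the decoding step the abstract describes, the sketch below shows how a context's hidden state might aggregate information from k retrieved neighbor hidden states via one attention-style message-passing step before projecting to vocabulary logits. This is a minimal sketch, not the authors' implementation: the module names, dimensions, the single-layer aggregator, and the plain-PyTorch formulation (rather than a full heterogeneous GNN over token and neighbor nodes) are all illustrative assumptions.

```python
# Minimal sketch of GNN-LM-style neighbor aggregation (illustrative only).
import torch
import torch.nn as nn

class NeighborAggregator(nn.Module):
    """One attention-style message-passing step from neighbor nodes
    (retrieved contexts) into the input-context node."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, h_ctx: torch.Tensor, h_nbr: torch.Tensor) -> torch.Tensor:
        # h_ctx: (batch, d) current-context hidden state from the base LM
        # h_nbr: (batch, k, d) hidden states of k retrieved neighbor tokens
        q = self.q(h_ctx).unsqueeze(1)                        # (batch, 1, d)
        scores = (q * self.k(h_nbr)).sum(-1)                  # (batch, k)
        att = torch.softmax(scores / h_nbr.size(-1) ** 0.5, dim=-1)
        msg = (att.unsqueeze(-1) * self.v(h_nbr)).sum(1)      # (batch, d)
        # Residual update of the context node with aggregated neighbor info.
        return h_ctx + self.out(msg)

# Toy usage with random tensors standing in for real LM states.
d, k, vocab = 64, 8, 1000
agg = NeighborAggregator(d)
head = nn.Linear(d, vocab)        # stand-in for the LM output head
h_ctx = torch.randn(2, d)         # from the base LM
h_nbr = torch.randn(2, k, d)      # from a kNN search over cached corpus states
logits = head(agg(h_ctx, h_nbr))  # next-token distribution
print(logits.shape)               # torch.Size([2, 1000])
```

In the paper's full setup, neighbors come from a nearest-neighbor search over cached hidden states of the training corpus and the graph also carries intra-context edges; this sketch keeps only the neighbor-to-context aggregation to show the core idea.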

Authors (7)
  1. Yuxian Meng (37 papers)
  2. Shi Zong (16 papers)
  3. Xiaoya Li (42 papers)
  4. Xiaofei Sun (36 papers)
  5. Tianwei Zhang (200 papers)
  6. Fei Wu (317 papers)
  7. Jiwei Li (137 papers)
Citations (36)


GitHub: https://github.com/ShannonAI/GNN-LM