Context-Aware Graph Attention Networks (1910.01736v1)

Published 4 Sep 2019 in cs.LG, cs.SI, eess.IV, eess.SP, and stat.ML

Abstract: Graph Neural Networks (GNNs) have been widely studied for graph data representation and learning. However, existing GNNs generally conduct context-aware learning on node feature representations only, which usually ignores the learning of edge (weight) representations. In this paper, we propose a novel unified GNN model, named Context-aware Adaptive Graph Attention Network (CaGAT). CaGAT aims to learn a context-aware attention representation for each graph edge by further exploiting the context relationships among different edges. In particular, CaGAT conducts context-aware learning on both node feature representations and edge (weight) representations simultaneously and cooperatively in a unified manner, which can boost their respective performance in network training. We apply CaGAT to semi-supervised learning tasks. Promising experimental results on several benchmark datasets demonstrate the effectiveness and benefits of CaGAT.
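The abstract describes joint, context-aware learning of node features and edge (attention) weights. Below is a minimal sketch of what such a layer could look like, assuming a GAT-style pairwise attention score followed by one propagation step that mixes each edge's attention with that of adjacent edges before aggregating node features. The class name, the `gamma` mixing parameter, and the `A @ A` context update are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextAwareGraphAttentionLayer(nn.Module):
    """Hypothetical CaGAT-style layer: node features and edge attention
    weights are updated jointly, with edge attention refined by propagating
    context from neighbouring edges (assumed formulation)."""

    def __init__(self, in_dim, out_dim, gamma=0.5):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # node feature transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # edge attention scorer
        self.gamma = gamma                                # context mixing weight

    def forward(self, x, adj):
        # x:   (N, in_dim) node features
        # adj: (N, N) adjacency matrix with self-loops
        h = self.W(x)                                     # (N, out_dim)
        N = h.size(0)

        # Raw pairwise attention scores (standard GAT-style scoring).
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))
        A = F.softmax(e, dim=-1)                          # (N, N) edge weights

        # Context-aware edge update (assumption): mix each edge weight with
        # the weights of neighbouring edges via one propagation step (A @ A),
        # so an edge's attention depends on the context of adjacent edges.
        A_ctx = (1 - self.gamma) * A + self.gamma * (A @ A)
        A_ctx = A_ctx / A_ctx.sum(dim=-1, keepdim=True).clamp(min=1e-9)

        # Node feature update using the context-refined attention.
        return F.elu(A_ctx @ h)


# Usage on a toy graph: 4 nodes in a ring, plus self-loops.
x = torch.randn(4, 8)
adj = torch.eye(4) + torch.tensor(
    [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float
)
layer = ContextAwareGraphAttentionLayer(8, 16)
print(layer(x, adj).shape)  # torch.Size([4, 16])
```

In a semi-supervised node-classification setting, such layers would typically be stacked and trained end to end with a cross-entropy loss on the labelled nodes, so the node and edge representations are learned cooperatively as the abstract describes.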

Authors (4)
  1. Bo Jiang (235 papers)
  2. Leiling Wang (2 papers)
  3. Jin Tang (139 papers)
  4. Bin Luo (209 papers)
Citations (2)