Graph Decoupling Attention Markov Networks for Semi-supervised Graph Node Classification (2104.13718v2)

Published 28 Apr 2021 in cs.LG and cs.AI

Abstract: Graph neural networks (GNN) have been ubiquitous in graph node classification tasks. Most GNN methods update node embeddings iteratively by aggregating information from their neighbors. However, they often suffer from negative disturbance, due to edges connecting nodes with different labels. One approach to alleviate this negative disturbance is to use attention to learn the weights of aggregation, but current attention-based GNNs only consider feature similarity and also suffer from a lack of supervision. In this paper, we consider the label dependency of graph nodes and propose a decoupling attention mechanism to learn both hard and soft attention. The hard attention is learned on labels for a refined graph structure with fewer inter-class edges, so that the aggregation's negative disturbance can be reduced. The soft attention aims to learn the aggregation weights based on features over the refined graph structure to enhance information gains during message passing. Particularly, we formulate our model under the EM framework, and the learned attention is used to guide the label propagation in the M-step and the feature propagation in the E-step, respectively. Extensive experiments are performed on six well-known benchmark graph datasets to verify the effectiveness of the proposed method.
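
The abstract describes the decoupled attention at a high level: a label-based hard attention that prunes likely inter-class edges, followed by a feature-based soft attention that weights aggregation on the refined graph. The sketch below is one way such a layer could look in PyTorch; the class name, the 0.5 pruning threshold, and the GAT-style soft attention are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a decoupled hard/soft attention layer (assumed design,
# not the paper's code). Hard attention filters edges using soft labels;
# soft attention computes feature-based aggregation weights on the kept edges.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecoupledAttentionLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_classes):
        super().__init__()
        self.feat_proj = nn.Linear(in_dim, out_dim)
        self.soft_attn = nn.Linear(2 * out_dim, 1)       # feature-based soft attention
        self.label_attn = nn.Linear(2 * num_classes, 1)  # label-based hard attention

    def forward(self, x, soft_labels, edge_index):
        # x:           [N, in_dim] node features
        # soft_labels: [N, C] predicted label distributions (e.g., from the E-step)
        # edge_index:  [2, E] source/target node indices
        src, dst = edge_index
        h = self.feat_proj(x)

        # Hard attention: keep an edge only if its label-pair score is high,
        # pruning likely inter-class edges (0.5 is an assumed threshold).
        label_score = torch.sigmoid(
            self.label_attn(torch.cat([soft_labels[src], soft_labels[dst]], dim=-1))
        ).squeeze(-1)
        keep = label_score > 0.5
        src, dst = src[keep], dst[keep]

        # Soft attention: feature-based weights on the refined graph,
        # normalized per destination node (segment softmax).
        e = F.leaky_relu(
            self.soft_attn(torch.cat([h[src], h[dst]], dim=-1))
        ).squeeze(-1)
        w = torch.exp(e - e.max())
        denom = torch.zeros(h.size(0), device=h.device).index_add_(0, dst, w) + 1e-12
        w = w / denom[dst]

        # Aggregate neighbor messages with the soft attention weights.
        out = torch.zeros_like(h).index_add_(0, dst, w.unsqueeze(-1) * h[src])
        return out
```

In the EM formulation sketched in the abstract, a layer like this would be applied with the E-step's predicted label distributions feeding the hard attention, while the soft-attention weights guide feature propagation; the alternation between label propagation (M-step) and feature propagation (E-step) is what the paper formalizes.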

Authors (6)
  1. Jie Chen (602 papers)
  2. Shouzhen Chen (6 papers)
  3. Mingyuan Bai (8 papers)
  4. Jian Pu (39 papers)
  5. Junping Zhang (65 papers)
  6. Junbin Gao (111 papers)
Citations (18)