NCAGC: A Neighborhood Contrast Framework for Attributed Graph Clustering (2206.07897v2)

Published 16 Jun 2022 in cs.CV

Abstract: Attributed graph clustering is one of the most fundamental tasks in the graph learning field; its goal is to group nodes with similar representations into the same cluster without human annotations. Recent studies based on graph contrastive learning have achieved remarkable results when exploiting graph-structured data. However, most existing methods 1) do not directly address the clustering task, since the representation learning and clustering processes are separated; 2) depend too much on data augmentation, which greatly limits the capability of contrastive learning; 3) ignore the contrastive message for clustering tasks, which degrades the clustering results. In this paper, we propose a Neighborhood Contrast Framework for Attributed Graph Clustering, namely NCAGC, which seeks to overcome the aforementioned limitations. Specifically, by leveraging the Neighborhood Contrast Module, the representations of neighboring nodes are 'pushed closer' and become clustering-oriented under the neighborhood contrast loss. Moreover, a Contrastive Self-Expression Module is built by minimizing the difference between node representations before and after the self-expression layer, which constrains the learning of the self-expression matrix. All the modules of NCAGC are optimized in a unified framework, so the learned node representations contain clustering-oriented information. Extensive experimental results on four attributed graph datasets demonstrate the promising performance of NCAGC compared with 16 state-of-the-art clustering methods. The code is available at https://github.com/wangtong627/NCAGC.
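
The two losses described in the abstract can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the authors' implementation (see the linked repository for that): the function names, the InfoNCE-style formulation, and the temperature hyperparameter are assumptions made here for exposition.

```python
# Hypothetical sketch of NCAGC-style objectives; illustrative only,
# not the authors' code from the linked repository.
import torch
import torch.nn.functional as F

def neighborhood_contrast_loss(z, adj, temperature=0.5):
    """Pull each node's representation toward its neighbors (positives)
    and away from all other nodes (negatives).
    z: (N, d) node representations; adj: (N, N) float {0,1} adjacency.
    Assumes every node has at least one neighbor."""
    z = F.normalize(z, dim=1)
    sim = torch.exp(z @ z.t() / temperature)               # pairwise similarities
    sim = sim * (1 - torch.eye(len(z), device=z.device))   # drop self-pairs
    pos = (sim * adj).sum(dim=1)                           # mass on neighbors
    denom = sim.sum(dim=1)                                 # mass on all nodes
    return -torch.log(pos / denom + 1e-12).mean()

def contrastive_self_expression_loss(z, coeff, temperature=0.5):
    """Treat each representation and its self-expressed reconstruction
    (coeff @ z) as a positive pair; cross-node pairs act as negatives.
    coeff: (N, N) learnable self-expression coefficient matrix."""
    z_hat = coeff @ z                                      # after self-expression layer
    z_n, zh_n = F.normalize(z, dim=1), F.normalize(z_hat, dim=1)
    sim = torch.exp(z_n @ zh_n.t() / temperature)
    pos = sim.diagonal()                                   # z_i vs. its reconstruction
    return -torch.log(pos / sim.sum(dim=1) + 1e-12).mean()
```

In a unified framework like the one the abstract describes, the two terms would be summed (with a weighting hyperparameter) and optimized jointly with the encoder. In self-expression-based clustering methods of this family, the learned coefficient matrix is typically symmetrized (e.g. |C| + |Cᵀ|) and passed to spectral clustering to obtain the final node assignments.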

Citations (5)
