
ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning (2110.02027v3)

Published 5 Oct 2021 in cs.AI

Abstract: Contrastive Learning (CL) has emerged as a dominant technique for unsupervised representation learning which embeds augmented versions of the anchor close to each other (positive samples) and pushes the embeddings of other samples (negatives) apart. As revealed in recent studies, CL can benefit from hard negatives (negatives that are most similar to the anchor). However, we observe limited benefits when we adopt existing hard negative mining techniques of other domains in Graph Contrastive Learning (GCL). We perform both experimental and theoretical analysis on this phenomenon and find it can be attributed to the message passing of Graph Neural Networks (GNNs). Unlike CL in other domains, most hard negatives are potentially false negatives (negatives that share the same class with the anchor) if they are selected merely according to the similarities between anchor and themselves, which will undesirably push away the samples of the same class. To remedy this deficiency, we propose an effective method, dubbed ProGCL, to estimate the probability of a negative being true one, which constitutes a more suitable measure for negatives' hardness together with similarity. Additionally, we devise two schemes (i.e., ProGCL-weight and ProGCL-mix) to boost the performance of GCL. Extensive experiments demonstrate that ProGCL brings notable and consistent improvements over base GCL methods and yields multiple state-of-the-art results on several unsupervised benchmarks or even exceeds the performance of supervised ones. Also, ProGCL is readily pluggable into various negatives-based GCL methods for performance improvement. We release the code at https://github.com/junxia97/ProGCL.

Authors (5)
  1. Jun Xia (76 papers)
  2. Lirong Wu (67 papers)
  3. Ge Wang (214 papers)
  4. Jintao Chen (9 papers)
  5. Stan Z. Li (223 papers)
Citations (102)

Summary

  • The paper introduces ProGCL, which redefines hard negative mining in graph contrastive learning by using a two-component Beta Mixture Model to differentiate true negatives.
  • It presents two enhancement schemes, ProGCL-weight and ProGCL-mix, that integrate the refined negative hardness measure into existing GCL frameworks.
  • Extensive experiments on datasets like Amazon-Photo and Coauthor-CS show that ProGCL consistently boosts unsupervised node classification and rivals some supervised approaches.

Rethinking Hard Negative Mining in Graph Contrastive Learning: Insights into ProGCL

"ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning," by Jun Xia et al., addresses hard negative mining within Graph Contrastive Learning (GCL). The authors show why hard negative mining techniques imported from contrastive learning (CL) in other domains are ineffective in GCL, tracing the failure to the unique properties of Graph Neural Networks (GNNs).

Problem Statement and Contributions

Contrastive Learning has demonstrated significant success in unsupervised representation learning across various fields. It relies on distinguishing between positive samples, which are drawn close in the embedding space, and negative samples, which are actively pushed apart. However, when applied to GCL, standard hard negative mining techniques show limited benefits and may even degrade performance. The paper identifies the culprit as the GNN's message-passing scheme: by smoothing the embeddings of neighboring, often same-class, nodes, message passing ensures that many of the samples most similar to the anchor are false negatives, i.e., samples that share the anchor's class yet masquerade as hard negatives. Mining negatives by similarity alone therefore pushes apart samples that should stay close.
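This failure mode can be illustrated with a toy simulation (my own sketch, not from the paper): after a couple of rounds of mean aggregation over a two-community graph, the negatives most similar to an anchor are exactly the nodes of the anchor's own class.

```python
import numpy as np

# Toy illustration (not from the paper): mean-aggregation message passing
# smooths features within communities, so the negatives most similar to an
# anchor are overwhelmingly nodes of the anchor's own class (false negatives).
rng = np.random.default_rng(0)
n = 20
labels = np.repeat([0, 1], n // 2)                    # two balanced classes
X = rng.normal(size=(n, 8)) + 3.0 * labels[:, None]   # class-shifted features

# Row-normalized adjacency with edges only inside each class (a deliberately
# stark community structure that makes the smoothing effect obvious).
A = (labels[:, None] == labels[None, :]).astype(float)
A /= A.sum(axis=1, keepdims=True)

H = A @ (A @ X)                                       # two rounds of neighbor averaging

anchor = 0
sims = (H @ H[anchor]) / (np.linalg.norm(H, axis=1) * np.linalg.norm(H[anchor]))
ranked = [i for i in np.argsort(-sims) if i != anchor]
top5_same_class = sum(labels[i] == labels[anchor] for i in ranked[:5])
print(top5_same_class)  # the "hardest" negatives all share the anchor's class
```

In this toy setup every one of the five most similar negatives is a same-class node, so a similarity-only miner would push apart precisely the pairs a classifier needs kept together.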

In this context, the authors introduce ProGCL, which scores a negative's hardness by combining its similarity to the anchor with its estimated probability of being a true negative. The estimate comes from fitting a two-component Beta Mixture Model (BMM) to the distribution of anchor-negative similarities, separating likely true negatives from likely false ones and yielding a more reliable measure of hardness.
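A minimal sketch of how such a two-component Beta mixture can be fit with EM over similarity scores (the weighted method-of-moments M-step, the initialization, and the function names are my assumptions, not taken verbatim from the paper):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def fit_beta_mixture(s, n_iter=50, eps=1e-4):
    """EM for a two-component Beta mixture over similarity scores s in (0, 1).

    Returns per-component shape parameters (a, b) and mixture weights w.
    The M-step uses a weighted method-of-moments update, a common choice
    for Beta mixtures.
    """
    s = np.clip(s, eps, 1 - eps)
    a = np.array([2.0, 4.0])        # initial shapes (assumed, not from the paper)
    b = np.array([4.0, 2.0])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each score
        pdf = np.stack([w[k] * beta_dist.pdf(s, a[k], b[k]) for k in range(2)])
        r = pdf / pdf.sum(axis=0, keepdims=True)
        # M-step: weighted method-of-moments update of the Beta parameters
        for k in range(2):
            m = np.average(s, weights=r[k])
            v = np.average((s - m) ** 2, weights=r[k])
            common = max(m * (1 - m) / max(v, eps) - 1, eps)
            a[k], b[k] = m * common, (1 - m) * common
        w = r.mean(axis=1)
    return a, b, w

def prob_true_negative(s, a, b, w, eps=1e-4):
    """Posterior that a similarity s belongs to the lower-mean component,
    interpreted as the true-negative mode of the similarity distribution."""
    s = np.clip(s, eps, 1 - eps)
    tn = int(np.argmin(a / (a + b)))   # component with the smaller mean similarity
    pdf = np.stack([w[k] * beta_dist.pdf(s, a[k], b[k]) for k in range(2)])
    return pdf[tn] / pdf.sum(axis=0)
```

On bimodal similarity data the fitted posterior assigns low-similarity negatives to the true-negative component with high confidence, which is the quantity ProGCL combines with raw similarity to score hardness.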

The paper makes several notable contributions:

  1. BMM Application: It proposes using a BMM to assess, from the observed similarity distribution, the likelihood that a negative sample is a true negative rather than a false one.
  2. Performance Enhancement Schemes: The authors develop two schemes—ProGCL-weight and ProGCL-mix—to improve GCL methods by incorporating the new hardness measure into their frameworks.
  3. Empirical Validation: Extensive experiments reveal that ProGCL consistently outperforms standard GCL models and even some supervised approaches across multiple datasets, emphasizing its utility and adaptability.
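The two schemes above can be sketched roughly as follows (a loose, hedged reconstruction: the function names, the exact weighting formula, and the normalization are my assumptions, and the paper's definitions differ in detail):

```python
import numpy as np

def progcl_weight_loss(sim, p_true, tau=0.5):
    """Hardness-weighted InfoNCE in the spirit of ProGCL-weight (sketch).

    sim:    (N, N) similarity matrix, assumed scaled to [0, 1]; the diagonal
            holds the positive pairs.
    p_true: (N, N) estimated probability that each off-diagonal pair is a
            true negative (e.g. a Beta-mixture posterior).
    Each negative is weighted by p_true * sim, so pairs that are similar AND
    likely true negatives count most; weights are normalized to mean 1 per row.
    """
    n = sim.shape[0]
    off = ~np.eye(n, dtype=bool)
    w = np.where(off, p_true * sim, 0.0)
    w = w * (n - 1) / w.sum(axis=1, keepdims=True)
    exp_sim = np.exp(sim / tau)
    pos = np.diag(exp_sim)
    neg = (w * exp_sim * off).sum(axis=1)
    return float(-np.log(pos / (pos + neg)).mean())

def progcl_mix(neg_emb, p_true, rng):
    """ProGCL-mix-style synthesis (sketch): create extra hard negatives by
    convexly mixing pairs of existing negatives, sampling each endpoint in
    proportion to its estimated true-negative probability."""
    m = len(neg_emb)
    prob = p_true / p_true.sum()
    i = rng.choice(m, size=m, p=prob)
    j = rng.choice(m, size=m, p=prob)
    alpha = rng.uniform(size=(m, 1))
    return alpha * neg_emb[i] + (1 - alpha) * neg_emb[j]
```

The design point both schemes share is that the Beta-mixture posterior, not raw similarity alone, decides how much a negative contributes, which is what keeps likely false negatives from being pushed away.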

Experimental Findings

The experimental results underscore ProGCL’s efficacy. When integrated into base GCL methods like GRACE and GCA, ProGCL significantly improves unsupervised node classification, with notable gains on datasets such as Amazon-Photo and Coauthor-CS. ProGCL not only lifts performance over unsupervised baselines but also competes favorably against some supervised models, demonstrating its robustness and scalability.

Moreover, ProGCL's adaptability is demonstrated through its integration into various existing GCL methods beyond its primary scope, such as MERIT, where it shows consistent improvements.

Theoretical Implications

From a theoretical standpoint, ProGCL's approach to negative mining in node-level GCL represents an advancement in understanding the sampling bias typically afflicting GCL. By leveraging the inherent distribution properties revealed through a mixture model, this method bridges a critical gap left by traditional negative mining techniques applied in other contexts, making it particularly suited for graph-based data.

Future Directions

The authors suggest several potential directions for future research: extending the methodology to more real-world applications such as social network analysis and drug discovery, and further dissecting the theoretical foundations of contrastive learning's success. These avenues could strengthen both the theoretical framework and the practical deployment of graph representation learning.

Conclusion

In sum, "ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning" enhances the repertoire of techniques used in unsupervised graph representation learning by addressing a crucial pitfall in existing methodologies. Through a nuanced understanding of GNN message passing, combined with a sophisticated probabilistic assessment of negatives, ProGCL appears poised to drive forward the frontier of graph-based CL.
