
LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation (2002.02126v4)

Published 6 Feb 2020 in cs.IR and cs.LG

Abstract: Graph Convolution Network (GCN) has become new state-of-the-art for collaborative filtering. Nevertheless, the reasons of its effectiveness for recommendation are not well understood. Existing work that adapts GCN to recommendation lacks thorough ablation analyses on GCN, which is originally designed for graph classification tasks and equipped with many neural network operations. However, we empirically find that the two most common designs in GCNs -- feature transformation and nonlinear activation -- contribute little to the performance of collaborative filtering. Even worse, including them adds to the difficulty of training and degrades recommendation performance. In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation. We propose a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation -- for collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly propagating them on the user-item interaction graph, and uses the weighted sum of the embeddings learned at all layers as the final embedding. Such simple, linear, and neat model is much easier to implement and train, exhibiting substantial improvements (about 16.0\% relative improvement on average) over Neural Graph Collaborative Filtering (NGCF) -- a state-of-the-art GCN-based recommender model -- under exactly the same experimental setting. Further analyses are provided towards the rationality of the simple LightGCN from both analytical and empirical perspectives.

The paper "LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation" addresses a critical concern within the domain of recommender systems: the complexity and effectiveness of Graph Convolution Networks (GCNs) for Collaborative Filtering (CF). The authors assert that many operations incorporated from traditional GCNs, such as feature transformation and nonlinear activation, do not significantly contribute to the recommendation performance. In fact, these operations may even degrade performance by increasing the difficulty of training.

Key Contributions

First, the authors provide an empirical analysis to support their claim. They show that the two most common designs in GCNs—feature transformation and nonlinear activation—offer negligible benefits for CF tasks. Their rigorous ablation studies indicate that removing these components results in significant improvements, both in terms of lower training loss and higher recommendation accuracy.

Motivated by these findings, the authors propose a new model named Light Graph Convolution Network (LightGCN). LightGCN primarily incorporates only neighborhood aggregation, which is identified as the most fundamental component of GCNs for CF. This simplified architecture avoids not only feature transformations and nonlinear activation but also self-connections, which are typically used in existing models.

Methodology

The architecture of LightGCN can be broken down into two main components:

  1. Light Graph Convolution (LGC): This operation refines a target node's embedding by aggregating the embeddings of its neighboring nodes (users or items), weighting each neighbor by the symmetric normalization term 1/√(|N_u||N_i|). Unlike traditional GCNs, LightGCN drops the feature transformation matrices and nonlinear activation functions, reducing the computation to linear propagation of embeddings.
  2. Layer Combination: After multiple layers of convolution, the embeddings obtained at each layer (including the initial layer-0 embeddings) are combined with a weighted sum to form the final node representation. This combats the over-smoothing typically associated with deeper GCNs and subsumes the effect of self-connections, ensuring that the embeddings remain informative and relevant.
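The two steps above can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the authors' implementation: the interaction matrix, embedding size, and number of layers are made up for the example.

```python
import numpy as np

# Toy interaction matrix R (users x items); 1 = observed interaction.
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)

n_users, n_items = R.shape
d = 4  # embedding size
rng = np.random.default_rng(0)

# ID embeddings for users and items -- the only trainable parameters.
E0 = rng.normal(scale=0.1, size=(n_users + n_items, d))

# Symmetrically normalized adjacency of the bipartite user-item graph:
# A_hat = D^{-1/2} A D^{-1/2}, with A = [[0, R], [R^T, 0]].
A = np.zeros((n_users + n_items, n_users + n_items))
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T
deg = A.sum(axis=1)
d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Light Graph Convolution: E^{(k+1)} = A_hat @ E^{(k)} -- no feature
# transformation, no nonlinearity.
K = 3
layers = [E0]
for _ in range(K):
    layers.append(A_hat @ layers[-1])

# Layer combination: uniform weighted sum (alpha_k = 1/(K+1) in the paper).
E_final = np.mean(layers, axis=0)
users, items = E_final[:n_users], E_final[n_users:]

# Model prediction: inner product of final user and item embeddings.
scores = users @ items.T
```

Note how the forward pass contains no learned weight matrices beyond the layer-0 embeddings themselves, which is exactly what keeps LightGCN's parameter count at the level of plain matrix factorization.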

Empirical Evaluation

The efficacy of LightGCN is rigorously evaluated against Neural Graph Collaborative Filtering (NGCF) and other state-of-the-art CF models such as Mult-VAE and GRMF across three benchmark datasets: Gowalla, Yelp2018, and Amazon-Book. The results are compelling, demonstrating that LightGCN consistently outperforms NGCF. For instance, LightGCN shows about 16.0% relative improvement on average in both recall@20 and ndcg@20 metrics over NGCF under identical experimental settings.
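For readers unfamiliar with the reported metrics, here is a small sketch of binary-relevance recall@k and ndcg@k for a single user; the ranking and held-out item set are hypothetical, chosen only to make the computation concrete.

```python
import numpy as np

def recall_at_k(ranked_items, relevant, k):
    """Fraction of the user's held-out (relevant) items found in the top-k."""
    hits = len(set(ranked_items[:k]) & relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked_items, relevant, k):
    """Binary-relevance NDCG: DCG of the ranking over the ideal DCG."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal

# Hypothetical top-5 ranking for one user with two held-out items.
ranked = [5, 2, 9, 7, 1]
relevant = {2, 7}
print(recall_at_k(ranked, relevant, 3))  # 0.5: only item 2 is in the top-3
```

In the paper these quantities are averaged over all test users with k = 20.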

Training and Complexity

One of the standout advantages of LightGCN is its simplicity, which translates into more efficient and less complex training compared to NGCF. Without the burden of additional operations like feature transformation and nonlinear activations, LightGCN achieves lower training loss and improved generalization capabilities. The overall parameter complexity is also comparable to standard matrix factorization techniques, making LightGCN a highly efficient alternative for practical deployment.
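LightGCN is trained with the Bayesian Personalized Ranking (BPR) loss on sampled (user, positive item, negative item) triples. A minimal sketch of that objective for a single triple, assuming the final embeddings from the propagation step:

```python
import numpy as np

def bpr_loss(e_u, e_i, e_j, reg=1e-4):
    """BPR loss for one (user, observed item, sampled unobserved item)
    triple: push the observed item's score above the unobserved one's.
    The reg term stands in for the L2 regularization on embeddings."""
    x_ui = e_u @ e_i  # score of the observed (positive) item
    x_uj = e_u @ e_j  # score of the sampled unobserved (negative) item
    # -log(sigmoid(x_ui - x_uj)), computed stably via logaddexp.
    loss = np.logaddexp(0.0, -(x_ui - x_uj))
    loss += reg * (e_u @ e_u + e_i @ e_i + e_j @ e_j)
    return loss
```

Because the only trainable parameters are the layer-0 embeddings, one gradient step through this loss updates exactly the same parameter set as matrix factorization would.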

Analytical Insights

The paper also explores theoretical insights that justify the simplification:

  • Relation to Simplified GCN (SGCN): LightGCN can be seen as a generalization that subsumes the self-connection effect through layer combination.
  • Connection with APPNP: LightGCN's layer combination strategy aligns with APPNP’s approach to combat over-smoothing, thereby ensuring the model's robustness for deeper architectures.
  • Embedding Smoothness: The layer combinations effectively address the smoothness of embeddings, ensuring that user embeddings reflect meaningful proximities in the interaction graph.
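These points all rest on the model's two defining equations, which can be stated compactly (for a user u; the item-side update is symmetric):

```latex
e_u^{(k+1)} = \sum_{i \in \mathcal{N}_u} \frac{1}{\sqrt{|\mathcal{N}_u|\,|\mathcal{N}_i|}}\, e_i^{(k)},
\qquad
e_u = \sum_{k=0}^{K} \alpha_k\, e_u^{(k)}
```

where $\mathcal{N}_u$ is the set of items user $u$ has interacted with and the paper fixes the combination weights uniformly, $\alpha_k = 1/(K+1)$. Expanding the sum shows how the layer combination injects the layer-0 (self) embedding into the final representation, which is what subsumes the self-connection effect of SGCN.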

Implications and Future Directions

The implications of this work are significant for both the theoretical advancement of GCNs in CF and their practical utility. The findings compel a re-evaluation of complex neural designs in favor of more straightforward, linear models that are easier to train and adjust.

Future research could investigate personalized layer combination weights (α_k), enabling user-specific or item-specific modeling of higher-order proximities. Additionally, integrating LightGCN with various types of side information (e.g., social networks, item content) could further broaden its applicability and impact. Fast, scalable solutions for streaming and industrial scenarios also provide avenues for future exploration.

In conclusion, LightGCN sets a new benchmark in the development of GCN-based recommendation systems by advocating simplicity and efficiency without compromising performance. This work will likely inspire further innovations in the design of more interpretable and manageable recommendation algorithms.

Authors (6)
  1. Xiangnan He
  2. Kuan Deng
  3. Xiang Wang
  4. Yan Li
  5. Yongdong Zhang
  6. Meng Wang
Citations (3,080)