Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach (2001.10167v1)

Published 28 Jan 2020 in cs.IR, cs.LG, and stat.ML

Abstract: Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models by iteratively stacking multiple layers of convolution aggregation operations and non-linear activation operations. Recently, in Collaborative Filtering (CF) based Recommender Systems (RS), by treating the user-item interaction behavior as a bipartite graph, some researchers model higher-layer collaborative signals with GCNs. These GCN based recommender models show superior performance compared to traditional works. However, these models suffer from training difficulty with non-linear activations for large user-item graphs. Besides, most GCN based models could not model deeper layers due to the over smoothing effect with the graph convolution operation. In this paper, we revisit GCN based CF models from two aspects. First, we empirically show that removing non-linearities would enhance recommendation performance, which is consistent with the theories in simple graph convolutional networks. Second, we propose a residual network structure that is specifically designed for CF with user-item interaction modeling, which alleviates the over smoothing problem in graph convolution aggregation operation with sparse user-item interaction data. The proposed model is a linear model and it is easy to train, scale to large datasets, and yield better efficiency and effectiveness on two real datasets. We publish the source code at https://github.com/newlei/LRGCCF.

Revisiting Graph Based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach

This paper presents a novel approach to enhance graph-based collaborative filtering models by addressing inherent limitations of Graph Convolutional Networks (GCNs) in recommender systems. The work focuses on two major challenges: the unnecessary complexity introduced by non-linear transformations in GCNs, and the over-smoothing effect observed when stacking more layers in the network.

Key Contributions

The authors propose a Linear Residual Graph Convolutional Network (LR-GCCF) model that offers significant improvements over existing GCN-based models in collaborative filtering (CF) settings. This model introduces two critical innovations:

  1. Linear Embedding Propagation: The paper removes non-linear transformations traditionally used in GCNs. This simplification aligns with recent theories in graph convolution, which indicate that non-linearities are not essential for capturing collaborative signals effectively. The linear model becomes easier to train and can scale efficiently to large datasets, resolving both complexity and scalability issues prevalent in previous GCN-based models.
  2. Residual Preference Prediction: Inspired by architectural principles from ResNet in CNNs, this research incorporates a residual learning framework to address the over-smoothing issue. By accumulating user-item interaction signals in a residual fashion across layers, LR-GCCF maintains user-specific characteristics while benefiting from high-order collaborative signals. This results in more robust modeling of user preferences without sacrificing diversity. Both ideas are illustrated in the sketch following this list.
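To make the two contributions concrete, the following is a minimal PyTorch-style sketch, a hypothetical illustration rather than the authors' released implementation (the class name, default layer count, and adjacency normalization are assumptions). Embeddings are propagated with a single sparse matrix multiplication per layer and no non-linear activation, and the embeddings from every layer are concatenated for prediction so that deeper layers refine rather than overwrite the original signal.

```python
import torch
import torch.nn as nn


class LinearResidualGCF(nn.Module):
    """Hypothetical sketch of LR-GCCF's two ideas, not the authors' released code."""

    def __init__(self, num_users, num_items, dim=64, num_layers=3):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.num_layers = num_layers

    def propagate(self, norm_adj):
        """norm_adj: sparse, normalized (users+items) x (users+items) adjacency
        built from the bipartite interaction graph (an assumption here; the
        paper's exact normalization may differ)."""
        emb = torch.cat([self.user_emb.weight, self.item_emb.weight], dim=0)
        layer_outputs = [emb]
        for _ in range(self.num_layers):
            # Linear embedding propagation: one sparse matrix multiply per layer,
            # with no feature transform and no non-linear activation.
            emb = torch.sparse.mm(norm_adj, emb)
            layer_outputs.append(emb)
        # Residual-style prediction: concatenate every layer's embedding so deeper
        # layers refine rather than overwrite the earlier, user-specific signal.
        final = torch.cat(layer_outputs, dim=1)
        return torch.split(final, [self.user_emb.num_embeddings,
                                   self.item_emb.num_embeddings], dim=0)

    def score(self, norm_adj, users, items):
        # Predicted preference is the inner product of the concatenated embeddings.
        user_final, item_final = self.propagate(norm_adj)
        return (user_final[users] * item_final[items]).sum(dim=-1)
```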

Numerical Results and Claims

The proposed LR-GCCF model demonstrated marked improvements in recommendation tasks over both classical models such as BPR and existing GCN-based models such as NGCF and PinSage. Notably, LR-GCCF achieved higher Hit Rate (HR) and Normalized Discounted Cumulative Gain (NDCG) scores on the Amazon Books and Gowalla datasets, outperforming competitors across multiple settings. The linear propagation and residual prediction were identified as the key factors driving these improvements. The paper also reported comparable or superior effectiveness at lower computational cost, indicating that simpler models are not only sufficient but advantageous in practice.
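For context, the ranking metrics reported above are typically computed per user over a ranked recommendation list and then averaged. The generic helpers below are a hedged illustration; the paper's exact cutoff K and evaluation protocol are not reproduced here.

```python
import math


def hit_rate_at_k(ranked_items, relevant_items, k):
    """HR@K: 1.0 if any held-out item appears in the top-K recommendations."""
    return 1.0 if set(ranked_items[:k]) & set(relevant_items) else 0.0


def ndcg_at_k(ranked_items, relevant_items, k):
    """NDCG@K: discounted gain of hits in the top-K, normalized by the ideal ranking."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k])
              if item in relevant_items)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant_items), k)))
    return dcg / idcg if idcg > 0 else 0.0
```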

Theoretical and Practical Implications

Theoretically, this work challenges the assumption that non-linear transformations are necessary in GCNs for recommendation tasks. It posits that linearity, when combined with residual learning, can effectively capture and propagate collaborative signals. Practically, the reduced complexity and computational requirements of the LR-GCCF model make it feasible for large-scale applications, potentially broadening the accessibility of advanced recommender systems.

Future Developments

Looking ahead, this framework could be extended to other graph-based tasks beyond recommendation, particularly domains where over-smoothing is a pertinent issue. Innovations in layer-wise aggregation strategies might further enhance the effectiveness of simple GCN-like architectures, and as research on understanding and mitigating over-smoothing in deep learning evolves, integrating those insights with linear models could yield more efficient and scalable solutions.

In conclusion, the paper presents a substantial advance: it simplifies GCN-based collaborative filtering models while strengthening their performance. By directly addressing training complexity and over-smoothing, LR-GCCF emerges as a strong contender among modern recommender systems and paves a path for future research and applications in graph-based learning.

Authors (5)
  1. Lei Chen (485 papers)
  2. Le Wu (47 papers)
  3. Richang Hong (117 papers)
  4. Kun Zhang (353 papers)
  5. Meng Wang (1063 papers)
Citations (450)