DiffNet++: A Neural Influence and Interest Diffusion Network for Social Recommendation (2002.00844v4)

Published 15 Jan 2020 in cs.SI, cs.IR, cs.LG, and stat.ML

Abstract: Social recommendation has emerged to leverage social connections among users for predicting users' unknown preferences, which could alleviate the data sparsity issue in collaborative filtering based recommendation. Early approaches relied on each user's first-order social neighbors' interests for better user modeling and failed to model the social influence diffusion process from the global social network structure. Recently, we proposed a preliminary neural influence diffusion network (DiffNet) for social recommendation, which models the recursive social diffusion process to capture higher-order relationships for each user. However, we argue that, as users play a central role in both the user-user social network and the user-item interest network, modeling only the influence diffusion process in the social network would neglect users' latent collaborative interests in the user-item interest network. In this paper, we propose DiffNet++, an improved algorithm over DiffNet that models neural influence diffusion and interest diffusion in a unified framework. By reformulating social recommendation as a heterogeneous graph with the social network and the interest network as input, DiffNet++ advances DiffNet by injecting information from these two networks into user embedding learning at the same time. This is achieved by iteratively aggregating each user's embedding from three aspects: the user's previous embedding, the influence aggregation of social neighbors from the social network, and the interest aggregation of item neighbors from the user-item interest network. Furthermore, we design a multi-level attention network that learns how to attentively aggregate user embeddings from these three aspects. Finally, extensive experimental results on two real-world datasets clearly show the effectiveness of our proposed model.

Overview of DiffNet++: A Neural Influence and Interest Diffusion Network for Social Recommendation

The paper "DiffNet++: A Neural Influence and Interest Diffusion Network for Social Recommendation" addresses the enhancement of social recommendation systems by leveraging both user social networks and user-item interest networks. The authors recognize that traditional collaborative filtering (CF) methods struggle with data sparsity challenges due to limited user behavior data, and propose a model that unifies social influence diffusion with interest diffusion to improve recommendation accuracy.

Key Contributions

The core proposition of the paper is DiffNet++, which extends the preliminary work of DiffNet by integrating both higher-order user influence and interest reflected in social and interest networks, respectively. Unlike models that rely solely on first-order social neighbors, DiffNet++ constructs a heterogeneous graph to encapsulate complex user dynamics:

  • Influence Diffusion: Models the recursive influence process from higher-order social networks.
  • Interest Diffusion: Captures latent collaborative interests from the user-item graph.
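The two diffusion steps above can be sketched as a single graph-aggregation layer. This is a minimal NumPy illustration, not the paper's implementation: `diffusion_layer` and its equal-weight mean fusion are simplifying assumptions (DiffNet++ learns the fusion weights, as described below).

```python
import numpy as np

def diffusion_layer(U, V, S, R):
    """One simplified diffusion step over the heterogeneous graph.

    U: (n_users, d) user embeddings;  V: (n_items, d) item embeddings
    S: (n_users, n_users) binary social adjacency (row u -> users u follows)
    R: (n_users, n_items) binary user-item interaction matrix
    Returns updated user embeddings combining self, social, and interest signals.
    """
    # Row-normalize adjacency so each aggregation is a mean over neighbors
    # (max with 1 avoids division by zero for users with no neighbors).
    s_norm = S / np.maximum(S.sum(axis=1, keepdims=True), 1)
    r_norm = R / np.maximum(R.sum(axis=1, keepdims=True), 1)
    social = s_norm @ U      # influence aggregation from social neighbors
    interest = r_norm @ V    # interest aggregation from interacted items
    # Equal weights here for illustration; DiffNet++ learns these weights
    # with its multi-level attention network.
    return (U + social + interest) / 3.0
```

Stacking several such layers propagates influence and interest beyond first-order neighbors, which is the higher-order diffusion the paper emphasizes.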

For embedding learning, DiffNet++ builds user embeddings through iterative aggregation from three sources: the user's prior embedding, aggregation over the social network, and aggregation over the item interest network. A multi-level attention network is proposed to learn how to balance the contributions of these three aspects.
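A rough sketch of the attention-based fusion for a single user follows. The linear scorer `w` is a hypothetical stand-in for illustration; in DiffNet++ the attention scores come from a learned multi-level attention network.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_fusion(self_emb, social_emb, interest_emb, w):
    """Fuse the three aspect embeddings of one user with attention weights.

    Each aspect vector is scored (here by a simple dot product with w,
    an assumed placeholder for the paper's attention MLP); softmax turns
    the scores into weights, and the fused embedding is the weighted sum.
    """
    aspects = np.stack([self_emb, social_emb, interest_emb])  # (3, d)
    scores = aspects @ w                                      # (3,)
    alpha = softmax(scores)                                   # attention weights
    return alpha @ aspects                                    # (d,) fused embedding
```

With zero scores the fusion degenerates to a plain mean; the learned attention lets the model emphasize social influence for some users and item interests for others.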

Numerical Results and Model Performance

The paper presents extensive experiments conducted on four real-world datasets (Yelp, Flickr, Epinions, and Dianping). The evaluation metrics used include Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG), which demonstrate that DiffNet++ consistently outperforms existing methods:

  • Improvement over Baselines: DiffNet++ surpasses state-of-the-art models by significant margins, illustrating its capability to alleviate data sparsity and effectively capture intricate network structures.
  • Effectiveness of Multi-level Attention: By learning attention weights, the model differentiates the contributions of individual social neighbors and interacted items, yielding tailored recommendations.
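For reference, HR@K and NDCG@K for a single held-out item per user can be computed as below. These are the standard leave-one-out definitions; the paper's exact evaluation protocol may differ in details such as negative sampling.

```python
import numpy as np

def hit_ratio_at_k(ranked_items, ground_truth, k):
    """HR@K: 1.0 if the held-out item appears in the top-K ranked list."""
    return float(ground_truth in ranked_items[:k])

def ndcg_at_k(ranked_items, ground_truth, k):
    """NDCG@K with a single relevant item: 1/log2(rank+2) if hit, else 0.

    With one relevant item the ideal DCG is 1, so DCG equals NDCG.
    """
    topk = ranked_items[:k]
    if ground_truth in topk:
        rank = topk.index(ground_truth)      # 0-based position in the list
        return 1.0 / np.log2(rank + 2)
    return 0.0
```

Per-user values are averaged over all test users; NDCG rewards placing the held-out item nearer the top of the list, while HR only checks membership.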

Implications and Future Directions

Practically, the introduction of DiffNet++ represents a robust advancement in recommender systems by effectively addressing sparsity through comprehensive network modeling. Theoretically, the integration of graph neural network modalities for both social and interest networks sets a new precedent in representation learning for recommendation. The multi-level attention mechanism is of particular note, as it allows for personalized aggregation of diverse influences.

Looking forward, this research opens pathways for further exploration in collaborative intelligence frameworks. Potential developments could include expanding the heterogeneity of input data, incorporating temporal dynamics, and enhancing interpretability of the diffusion processes. Moreover, DiffNet++ provides a strong foundation for future work on even larger datasets and more complex user behavior modeling scenarios, showcasing its potential applicability to a broader array of AI-driven recommendation systems.

Authors (6)
  1. Le Wu (47 papers)
  2. Junwei Li (10 papers)
  3. Peijie Sun (48 papers)
  4. Richang Hong (117 papers)
  5. Yong Ge (31 papers)
  6. Meng Wang (1063 papers)
Citations (220)