FedDCSR: Federated Cross-domain Sequential Recommendation via Disentangled Representation Learning (2309.08420v7)

Published 15 Sep 2023 in cs.LG and cs.IR

Abstract: Cross-domain Sequential Recommendation (CSR), which leverages user sequence data from multiple domains, has received extensive attention in recent years. However, the existing CSR methods require sharing original user data across domains, which violates the General Data Protection Regulation (GDPR). Thus, it is necessary to combine federated learning (FL) and CSR to fully utilize knowledge from different domains while preserving data privacy. Nonetheless, the sequence feature heterogeneity across different domains significantly impacts the overall performance of FL. In this paper, we propose FedDCSR, a novel federated cross-domain sequential recommendation framework via disentangled representation learning. Specifically, to address the sequence feature heterogeneity across domains, we introduce an approach called inter-intra domain sequence representation disentanglement (SRD) to disentangle user sequence features into domain-shared and domain-exclusive features. In addition, we design an intra domain contrastive infomax (CIM) strategy to learn richer domain-exclusive features of users by performing data augmentation on user sequences. Extensive experiments on three real-world scenarios demonstrate that FedDCSR achieves significant improvements over existing baselines.

Citations (5)

Summary

  • The paper’s main contribution is FedDCSR, a framework that integrates federated learning with sequential recommendation to safeguard privacy and enhance performance.
  • It introduces a novel disentangled representation approach that separates domain-shared and domain-exclusive features to manage cross-domain feature heterogeneity.
  • Empirical results on Amazon benchmarks demonstrate significant improvements in metrics like MRR, HR@10, and NDCG@10 over existing state-of-the-art methods.

FedDCSR: Federated Cross-Domain Sequential Recommendation via Disentangled Representation Learning

FedDCSR presents a novel federated cross-domain sequential recommendation framework addressing privacy concerns and performance limitations in cross-domain sequential recommendation (CSR). Traditional CSR systems often require sharing user data across domains, which conflicts with the GDPR. The integration of federated learning (FL) into CSR addresses this by keeping data decentralized. However, sequence feature heterogeneity across domains poses a critical challenge: naive application of classical FL methods such as FedAvg can lead to suboptimal results.
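The federated setup mentioned above can be illustrated with a minimal FedAvg-style sketch. The function name, parameter layout, and weighting scheme below are illustrative assumptions for exposition, not the paper's exact aggregation protocol; the key point is that clients exchange only model parameters, never raw user sequences.

```python
# Minimal sketch of FedAvg-style aggregation: each domain (client) trains
# locally, then the server averages uploaded parameters weighted by local
# dataset size. Raw user data never leaves the client.

def fedavg(client_params, client_sizes):
    """Weighted average of per-client parameter dicts.

    client_params: list of dicts mapping parameter name -> list of floats.
    client_sizes:  list of local dataset sizes used as aggregation weights.
    """
    total = sum(client_sizes)
    agg = {}
    for name in client_params[0]:
        dim = len(client_params[0][name])
        agg[name] = [
            sum(p[name][i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)
        ]
    return agg

# Two hypothetical domain clients sharing a single 2-dim parameter "w".
clients = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
sizes = [1, 3]
print(fedavg(clients, sizes))  # {'w': [2.5, 3.5]}
```

The weighting by `client_sizes` mirrors the standard FedAvg convention of giving clients with more local data proportionally more influence on the global model.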

The paper introduces FedDCSR, a framework leveraging disentangled representation learning to manage feature heterogeneity effectively. FedDCSR employs a method named inter-intra domain sequence representation disentanglement (SRD), segregating user sequence features into domain-shared and domain-exclusive components. This decomposition allows for precise, targeted learning where domain-shared features facilitate inter-domain knowledge transfer, while domain-exclusive features cater to local domain-specific preferences.
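As a rough illustration of the shared/exclusive split, each domain could keep two encoders and upload only the domain-shared one for federated aggregation. The class, method names, and toy "encoders" below are hypothetical placeholders, not the paper's actual SRD modules:

```python
# Sketch of the disentanglement idea: domain-shared parameters are
# aggregated across domains, domain-exclusive parameters stay local.

class DomainClient:
    def __init__(self, shared_params, exclusive_params):
        self.shared = shared_params        # aggregated across domains
        self.exclusive = exclusive_params  # kept strictly local

    def encode(self, seq_feature):
        # Toy "encoders": elementwise scaling by each parameter set,
        # standing in for the shared and exclusive representation paths.
        z_shared = [w * x for w, x in zip(self.shared, seq_feature)]
        z_exclusive = [w * x for w, x in zip(self.exclusive, seq_feature)]
        return z_shared, z_exclusive

    def upload(self):
        # Only domain-shared parameters ever leave the client.
        return list(self.shared)

a = DomainClient([1.0, 1.0], [0.5, 2.0])
b = DomainClient([3.0, 3.0], [4.0, 0.1])
# Server averages only the shared encoders; exclusive ones never move.
avg_shared = [(x + y) / 2 for x, y in zip(a.upload(), b.upload())]
print(avg_shared)  # [2.0, 2.0]
```

The design choice this sketch highlights is that cross-domain knowledge transfer happens entirely through the aggregated shared path, while domain-specific preferences remain in the locally-held exclusive path.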

To further enhance domain-specific features, the framework implements an intra domain contrastive infomax (CIM) strategy. It applies data augmentation to user sequences to extract richer domain-exclusive features, thereby maximizing mutual information within a domain. Empirical results on Amazon datasets substantiate the efficacy of FedDCSR, showing that it significantly outperforms existing state-of-the-art methods on metrics such as MRR, HR@10, and NDCG@10.
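The contrastive infomax idea can be sketched with a standard InfoNCE-style loss, where an augmented view of a user's sequence serves as the positive pair and other users in the batch serve as negatives. The toy augmentation and dot-product similarity below are illustrative assumptions in the spirit of CIM, not FedDCSR's actual augmentation or objective:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def augment(seq, drop_idx):
    # Toy augmentation: zero out one position of the sequence embedding.
    return [0.0 if i == drop_idx else x for i, x in enumerate(seq)]

def info_nce(anchor, positive, negatives, tau=1.0):
    # -log( exp(sim(a,p)/tau) / [exp(sim(a,p)/tau) + sum_n exp(sim(a,n)/tau)] )
    pos = math.exp(dot(anchor, positive) / tau)
    denom = pos + sum(math.exp(dot(anchor, n) / tau) for n in negatives)
    return -math.log(pos / denom)

anchor = [1.0, 0.0, 1.0]
positive = augment(anchor, drop_idx=1)   # identical here, since anchor[1] == 0.0
negatives = [[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]]
loss = info_nce(anchor, positive, negatives)
print(round(loss, 4))
```

Minimizing a loss of this form pulls a sequence representation toward its augmented view and pushes it away from other users' representations, which is one common way to maximize a lower bound on mutual information within a domain.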

Strong Numerical Results and Bold Claims

The experimental outcomes demonstrate FedDCSR's superiority, with notable performance improvements over multiple baselines. In scenarios such as Food-Kitchen-Clothing-Beauty (FKCB), Movie-Book-Game (MBG), and Sports-Garden-Home (SGH), FedDCSR consistently shows higher metric values, reflecting its capability to handle feature heterogeneity and privacy constraints. The framework’s ability to achieve significant gains without sharing raw user data is particularly compelling for privacy-sensitive applications.

Implications and Future Directions

The implications of FedDCSR are manifold. Practically, it offers a viable solution for organizations aiming to harness inter-domain interactions without compromising user privacy. Theoretically, the disentangled approach and contrastive learning strategy pave the way for scalable, collaborative models in federated environments. Future work could explore integrating other forms of contrastive learning or expanding to more diverse and complex datasets. Improving the communication efficiency of the federated protocol could also be examined to further enhance real-world applicability.

FedDCSR sets a foundational step in federated CSR by effectively disentangling and leveraging domain-specific and domain-shared features while upholding stringent privacy standards. The introduction of strategies like SRD and CIM significantly enhances the model’s adaptiveness and performance, charting a path for further innovations in federated recommendation systems.