DemiNet: Dependency-Aware Multi-Interest Network with Self-Supervised Graph Learning for Click-Through Rate Prediction (2109.12512v2)

Published 26 Sep 2021 in cs.IR

Abstract: In this paper, we propose a novel model named DemiNet (short for DEpendency-Aware Multi-Interest Network) to address the above two issues. To be specific, we first consider various dependency types between item nodes and perform dependency-aware heterogeneous attention for denoising and obtaining accurate sequence item representations. Secondly, for multiple interests extraction, multi-head attention is conducted on top of the graph embedding. To filter out noisy inter-item correlations and enhance the robustness of extracted interests, self-supervised interest learning is introduced to the above two steps. Thirdly, to aggregate the multiple interests, interest experts corresponding to different interest routes give rating scores respectively, while a specialized network assigns the confidence of each score. Experimental results on three real-world datasets demonstrate that the proposed DemiNet significantly improves the overall recommendation performance over several state-of-the-art baselines. Further studies verify the efficacy and interpretability benefits brought by the fine-grained user interest modeling.
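The pipeline the abstract describes (graph-refined item embeddings, multi-head attention that extracts several interest vectors, per-interest expert scores combined by a confidence network) can be sketched roughly as follows. This is a minimal illustration under assumptions: the class name, tensor shapes, PyTorch usage, and the simple linear stand-in for the dependency-aware heterogeneous graph attention are all hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiInterestCTR(nn.Module):
    """Hypothetical sketch of a DemiNet-style pipeline (not the official code):
    graph-refined item embeddings -> per-interest attention extraction ->
    interest-expert scores aggregated by a confidence network."""

    def __init__(self, dim: int = 64, num_interests: int = 4):
        super().__init__()
        self.num_interests = num_interests
        # Stand-in for the dependency-aware heterogeneous graph attention layer.
        self.graph_refine = nn.Linear(dim, dim)
        # One learned query per interest route over the sequence items.
        self.interest_queries = nn.Parameter(torch.randn(num_interests, dim))
        # Interest experts: each scores one (interest, target item) pair.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
            for _ in range(num_interests)
        )
        # Confidence network assigning a weight to each expert's score.
        self.confidence = nn.Linear(2 * dim, num_interests)

    def forward(self, seq_emb: torch.Tensor, target_emb: torch.Tensor) -> torch.Tensor:
        # seq_emb: (B, L, dim) behavior-sequence item embeddings
        # target_emb: (B, dim) candidate item embedding
        h = torch.relu(self.graph_refine(seq_emb))                 # "graph-refined" items
        attn = torch.einsum("kd,bld->bkl", self.interest_queries, h)
        attn = F.softmax(attn / h.size(-1) ** 0.5, dim=-1)         # (B, K, L)
        interests = torch.einsum("bkl,bld->bkd", attn, h)          # (B, K, dim)

        tgt = target_emb.unsqueeze(1).expand(-1, self.num_interests, -1)
        pairs = torch.cat([interests, tgt], dim=-1)                # (B, K, 2*dim)
        scores = torch.stack(
            [self.experts[k](pairs[:, k]) for k in range(self.num_interests)], dim=1
        ).squeeze(-1)                                              # (B, K)

        gate_in = torch.cat([interests.mean(dim=1), target_emb], dim=-1)
        conf = F.softmax(self.confidence(gate_in), dim=-1)         # (B, K)
        return torch.sigmoid((conf * scores).sum(dim=-1))          # CTR estimate

# Usage on random tensors (shapes only, for illustration)
model = MultiInterestCTR()
ctr = model(torch.randn(8, 20, 64), torch.randn(8, 64))
print(ctr.shape)  # torch.Size([8])
```

The self-supervised interest learning and the denoising of inter-item dependencies mentioned in the abstract are omitted here; the sketch only shows how multiple interest routes and a confidence-weighted expert aggregation could fit together.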

Authors (6)
  1. Yule Wang (12 papers)
  2. Qiang Luo (46 papers)
  3. Yue Ding (49 papers)
  4. Yunzhe Li (28 papers)
  5. Dong Wang (628 papers)
  6. Hongbo Deng (20 papers)
Citations (3)
