Dual Test-time Training for Out-of-distribution Recommender System (2407.15620v2)

Published 22 Jul 2024 in cs.IR and cs.LG

Abstract: Deep learning has been widely applied in recommender systems and has achieved revolutionary progress recently. However, most existing learning-based methods assume that the user and item distributions remain unchanged between the training phase and the test phase. In real-world scenarios, the distribution of user and item features can naturally shift, potentially resulting in a substantial decrease in recommendation performance. This phenomenon can be formulated as an Out-Of-Distribution (OOD) recommendation problem. To address this challenge, we propose a novel Dual Test-Time-Training framework for OOD Recommendation, termed DT3OR. In DT3OR, we incorporate a model adaptation mechanism during the test-time phase to carefully update the recommendation model, allowing it to adapt to the shifting user and item features. Specifically, we propose a self-distillation task and a contrastive task to help the model learn both the user's invariant interest preferences and the variant user/item characteristics during the test-time phase, thus facilitating smooth adaptation to the shifting features. Furthermore, we provide theoretical analysis to support the rationale behind our dual test-time training framework. To the best of our knowledge, this paper is the first work to address OOD recommendation via a test-time-training strategy. We conduct experiments on three datasets with various backbones. Comprehensive experimental results demonstrate the effectiveness of DT3OR compared to other state-of-the-art baselines.
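The abstract only sketches the two test-time objectives, so below is a minimal, hypothetical PyTorch illustration of how a dual test-time adaptation loop of this kind could be wired: a frozen teacher copy of the trained model supplies the self-distillation signal, while a contrastive loss over two augmented views of the user features drives adaptation to the shift. The model interface (`encode_users`, a score-returning forward), the dropout-based augmentation, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import copy
import torch
import torch.nn.functional as F


def dropout_augment(x, p=0.1):
    # Illustrative augmentation (an assumption): a stochastic view of the features.
    return F.dropout(x, p=p, training=True)


def nt_xent(z1, z2, temperature=0.5):
    # Standard NT-Xent contrastive loss between two embedding views.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def test_time_adapt(model, user_feats, item_feats, steps=1, lr=1e-4, tau=2.0):
    """Adapt a trained recommender to shifted test-time user/item features.

    Assumes `model(user_feats, item_feats)` returns interaction scores and
    `model.encode_users(...)` returns user embeddings; both interfaces are
    hypothetical stand-ins for whatever backbone is used.
    """
    teacher = copy.deepcopy(model).eval()           # frozen teacher keeps invariant preferences
    for p in teacher.parameters():
        p.requires_grad_(False)

    student = model.train()
    opt = torch.optim.Adam(student.parameters(), lr=lr)

    for _ in range(steps):
        # Self-distillation: keep student predictions close to the frozen teacher's.
        with torch.no_grad():
            t_scores = teacher(user_feats, item_feats) / tau
        s_scores = student(user_feats, item_feats) / tau
        distill_loss = F.kl_div(F.log_softmax(s_scores, dim=-1),
                                F.softmax(t_scores, dim=-1),
                                reduction="batchmean")

        # Contrastive task: pull two perturbed views of each user together.
        z1 = student.encode_users(dropout_augment(user_feats))
        z2 = student.encode_users(dropout_augment(user_feats))
        contrast_loss = nt_xent(z1, z2)

        loss = distill_loss + contrast_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return student
```

The design intuition behind the frozen teacher is that it anchors the student to the interest preferences learned at training time, so the contrastive objective can absorb the shifted user/item statistics without the model drifting away from them.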
