
Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations (2307.05722v3)

Published 10 Jul 2023 in cs.AI, cs.CL, and cs.IR

Abstract: LLMs have revolutionized natural language processing tasks, demonstrating their exceptional capabilities in various domains. However, their potential for behavior graph understanding in job recommendations remains largely unexplored. This paper focuses on unveiling the capability of LLMs to understand behavior graphs and on leveraging this understanding to enhance recommendations in online recruitment, including the handling of out-of-distribution (OOD) applications. We present a novel framework that harnesses the rich contextual information and semantic representations provided by LLMs to analyze behavior graphs and uncover underlying patterns and relationships. Specifically, we propose a meta-path prompt constructor that, for the first time, enables an LLM recommender to understand behavior graphs, and we design a corresponding path augmentation module to alleviate the prompt bias introduced by path-based sequence input. By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users. We evaluate the effectiveness of our approach on a comprehensive dataset and demonstrate its ability to improve the relevance and quality of recommended jobs. This research not only sheds light on the untapped potential of LLMs but also provides valuable insights for developing advanced recommendation systems in the recruitment market. The findings contribute to the growing field of natural language processing and offer practical implications for enhancing job search experiences. We release the code at https://github.com/WLiK/GLRec.
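To make the two components in the abstract concrete, the sketch below shows one plausible way a meta-path prompt constructor and a path augmentation step could work: each behavior-graph meta-path is serialized into a natural-language clause, and the same set of paths is re-serialized in several random orders so the LLM does not overfit to one fixed path ordering. This is an illustrative assumption, not the authors' released GLRec code; the edge templates and node names are hypothetical.

```python
import random

# Hypothetical templates mapping a (source type, relation, target type)
# edge to a natural-language clause.
EDGE_TEMPLATES = {
    ("user", "applied", "job"): "{src} applied to the job '{dst}'",
    ("user", "viewed", "job"): "{src} viewed the job '{dst}'",
    ("job", "posted_by", "company"): "the job '{src}' was posted by {dst}",
}

def meta_path_to_prompt(path):
    """Serialize a meta-path into one sentence an LLM can consume.

    `path` is a list of hops: ((src_type, src_name), relation,
    (dst_type, dst_name)).
    """
    clauses = []
    for (src_type, src), rel, (dst_type, dst) in path:
        template = EDGE_TEMPLATES[(src_type, rel, dst_type)]
        clauses.append(template.format(src=src, dst=dst))
    return "Behavior path: " + "; ".join(clauses) + "."

def augment_path_orders(paths, n_orders=3, seed=0):
    """Crude analogue of path augmentation: present the same set of
    meta-paths in several random orders, yielding one prompt variant
    per ordering, to reduce serialization-order bias."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_orders):
        order = paths[:]
        rng.shuffle(order)
        variants.append(" ".join(meta_path_to_prompt(p) for p in order))
    return variants
```

For example, the single path "Alice applied to a job posted by Acme" would serialize to one prompt, and `augment_path_orders` over several such paths would produce multiple order-shuffled prompt variants for training or inference.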

Authors (5)
  1. Likang Wu
  2. Zhaopeng Qiu
  3. Zhi Zheng
  4. Hengshu Zhu
  5. Enhong Chen
Citations (51)
