Harmonizing Human Insights and AI Precision: Hand in Hand for Advancing Knowledge Graph Task (2405.09477v1)
Abstract: Knowledge graph embedding (KGE) has attracted significant interest for its effectiveness in knowledge graph completion (KGC), particularly link prediction (LP), with recent KGE models topping the LP benchmarks. Despite the rapidly growing literature, insufficient attention has been paid to cooperation between humans and AI on KGs, even though humans' capacity for conceptual graph analysis could further improve KGE models with semantic information. To this end, we carefully designed a human-AI team (HAIT) system, dubbed KG-HAIT, which harnesses human insight on KGs through fully human-designed, ad-hoc dynamic programming (DP) that produces human-insightful feature (HIF) vectors capturing subgraph structure and semantic similarities. Integrating HIF vectors into the training of KGE models yields notable improvements across various benchmarks and metrics, along with accelerated model convergence. Our results underscore the effectiveness of human-designed DP for LP and emphasize the pivotal role of human-AI collaboration on KGs. KG-HAIT opens avenues for further exploration and innovation, paving the way toward more effective and insightful KG analysis techniques.
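The abstract does not specify the paper's DP recurrence, so the following is only a minimal sketch of the general idea: a human-designed dynamic program over a toy knowledge graph that produces one structural feature vector per entity. The walk-counting recurrence here is an invented stand-in, not the authors' actual HIF construction; entity names and the hop count `K` are illustrative.

```python
from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "likes", "carol"),
]

# Undirected adjacency over entities (relation labels are ignored
# in this sketch; the paper's HIF vectors are presumably richer).
adj = defaultdict(set)
for h, _, t in triples:
    adj[h].add(t)
    adj[t].add(h)
entities = sorted(adj)

K = 3  # number of hops = length of each feature vector

# DP: walks[k][e] = number of walks of length k starting at entity e.
# Base case: walks[0][e] = 1 (the empty walk).
# Recurrence: walks[k][e] = sum over neighbours n of walks[k-1][n].
walks = [{e: 1 for e in entities}]
for k in range(1, K + 1):
    walks.append({e: sum(walks[k - 1][n] for n in adj[e]) for e in entities})

# Structural feature vector per entity: walk counts at hops 1..K.
# Such vectors could then be concatenated with, or used to regularize,
# the entity embeddings during KGE training.
hif = {e: [walks[k][e] for k in range(1, K + 1)] for e in entities}
print(hif)  # → {'alice': [2, 4, 8], 'bob': [2, 4, 8], 'carol': [2, 4, 8]}
```

Because the toy graph is a symmetric triangle, all three entities get identical feature vectors; on a real KG the vectors differ and can encode subgraph-level similarity between entities.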