Uncertainty-Aware Relational Graph Neural Network for Few-Shot Knowledge Graph Completion (2403.04521v2)

Published 7 Mar 2024 in cs.CL

Abstract: Few-shot knowledge graph completion (FKGC) aims to infer the unseen facts of a relation given only a few reference entity pairs. Noise arising from the uncertainty of entities and triples can limit few-shot learning, yet existing FKGC works neglect such uncertainty, leaving them susceptible to noisy, limited reference samples. In this paper, we propose a novel uncertainty-aware few-shot KG completion framework (UFKGC) that models uncertainty, and thus better exploits the limited data, by learning representations as Gaussian distributions. An uncertainty representation is first designed to estimate the uncertainty scope of entity pairs after mapping their feature representations to Gaussian distributions. Further, to better integrate neighbors with uncertainty characteristics into entity features, we design an uncertainty-aware relational graph neural network (UR-GNN) that performs convolution operations between Gaussian distributions. Multiple random samples are then drawn for reference triples within the Gaussian distributions to generate smooth reference representations during optimization. The final completion score for each query instance is measured by the designed uncertainty optimization, making our approach more robust to noise in few-shot scenarios. Experimental results show that our approach achieves excellent performance on two benchmark datasets compared to its competitors.
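
As a rough illustration of the ideas described in the abstract (Gaussian entity representations, neighbor aggregation over distributions, and repeated reparameterized sampling of reference representations), the PyTorch sketch below shows one way such components could be wired together. It is not the authors' implementation: the module names, the aggregation rule, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of uncertainty-aware (Gaussian) entity representations with
# reparameterized sampling, in the spirit of the UFKGC abstract.
# NOT the paper's code: names, dimensions, and the aggregation rule are assumptions.
import torch
import torch.nn as nn


class GaussianEntityEncoder(nn.Module):
    """Maps a point entity embedding to a Gaussian (mean, log-variance)."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_mean = nn.Linear(dim, dim)
        self.to_logvar = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor):
        return self.to_mean(x), self.to_logvar(x)


class GaussianNeighborAggregator(nn.Module):
    """Aggregates neighbor Gaussians: means and variances are each linearly
    transformed and then averaged over the neighborhood (a simple stand-in for
    a distribution-level graph convolution)."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_mean = nn.Linear(dim, dim)
        self.w_var = nn.Linear(dim, dim)

    def forward(self, mu_nbr: torch.Tensor, logvar_nbr: torch.Tensor):
        # mu_nbr, logvar_nbr: [num_neighbors, dim]
        mu = self.w_mean(mu_nbr).mean(dim=0)
        var = torch.exp(self.w_var(logvar_nbr)).mean(dim=0)  # keep variance positive
        return mu, torch.log(var + 1e-8)


def sample_gaussian(mu: torch.Tensor, logvar: torch.Tensor, k: int = 5):
    """Draws k reparameterized samples z = mu + sigma * eps (smooth references)."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn(k, *mu.shape)
    return mu + std * eps  # [k, dim]


if __name__ == "__main__":
    dim = 16
    enc = GaussianEntityEncoder(dim)
    agg = GaussianNeighborAggregator(dim)
    entity = torch.randn(dim)        # hypothetical entity feature
    neighbors = torch.randn(4, dim)  # hypothetical neighbor features

    mu_e, logvar_e = enc(entity)
    mu_n, logvar_n = enc(neighbors)
    mu_agg, logvar_agg = agg(mu_n, logvar_n)
    # Fuse self and neighborhood Gaussians, then sample smooth representations.
    samples = sample_gaussian(0.5 * (mu_e + mu_agg), 0.5 * (logvar_e + logvar_agg), k=3)
    print(samples.shape)  # torch.Size([3, 16])
```

The sketch only covers the representation side; the paper's uncertainty-optimization scoring of query instances would sit on top of such sampled representations.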

Authors (6)
  1. Qian Li (236 papers)
  2. Shu Guo (39 papers)
  3. Cheng Ji (40 papers)
  4. Jiawei Sheng (27 papers)
  5. Jianxin Li (128 papers)
  6. Yinjia Chen (1 paper)