There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering (2301.04013v1)
Abstract: The integration of knowledge graphs with deep learning continues to improve the performance of various NLP tasks. In this paper, we focus on knowledge-infused link prediction and question answering using the LLMs T5 and BLOOM across three domains: Aviation, Movie, and Web. In this context, we infuse knowledge into both large and small LLMs, study their performance, and find it to be similar. For the link prediction task on the Aviation Knowledge Graph, we obtain a 0.2 hits@1 score using T5-small, T5-base, T5-large, and BLOOM. Using template-based scripts, we create a set of 1 million synthetic factoid QA pairs in the aviation domain from National Transportation Safety Board (NTSB) reports. On our curated QA pairs, the three T5 models achieve a 0.7 hits@1 score. We validate our findings with the paired Student's t-test and Cohen's kappa scores. For link prediction on the Aviation Knowledge Graph using T5-small and T5-large, we obtain a Cohen's kappa score of 0.76, showing substantial agreement between the models. Thus, we infer that small LLMs perform similarly to large LLMs with the infusion of knowledge.
- Ankush Agarwal (17 papers)
- Sakharam Gawade (3 papers)
- Sachin Channabasavarajendra (1 paper)
- Pushpak Bhattacharyya (153 papers)
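
The abstract mentions template-based scripts that turn NTSB reports into synthetic factoid QA pairs. The paper does not show these scripts here, so the following is a minimal sketch of how such generation typically works; the templates, field names (`id`, `aircraft`, `location`, `cause`), and the sample record are all hypothetical.

```python
# Minimal sketch (not the authors' scripts) of template-based factoid QA
# generation from structured accident records; all field names are hypothetical.
QA_TEMPLATES = [
    ("What was the aircraft involved in accident {id}?", "{aircraft}"),
    ("Where did accident {id} occur?", "{location}"),
    ("What was the probable cause of accident {id}?", "{cause}"),
]

def generate_qa_pairs(records):
    """Instantiate every template against every record, yielding (question, answer) pairs."""
    for rec in records:
        for q_tmpl, a_tmpl in QA_TEMPLATES:
            yield q_tmpl.format(**rec), a_tmpl.format(**rec)

# Toy record mimicking fields one might extract from an NTSB report (illustrative only).
records = [{"id": "NYC07LA123", "aircraft": "Cessna 172",
            "location": "Albany, NY", "cause": "fuel exhaustion"}]
for question, answer in generate_qa_pairs(records):
    print(question, "->", answer)
```

Scaling this pattern over tens of thousands of reports, with a few dozen templates, is how a corpus on the order of 1 million QA pairs can be produced cheaply.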
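The evaluation in the abstract rests on two measures: hits@1 for accuracy and Cohen's kappa for agreement between two models' outputs. Below is a minimal sketch of both, assuming each model emits a single predicted entity per query; the toy predictions and entity names are invented for illustration, and `cohen_kappa_score` is the standard scikit-learn implementation.

```python
# Minimal sketch (not the authors' evaluation code) of hits@1 and Cohen's kappa
# for generative link prediction, assuming one top prediction per query.
from sklearn.metrics import cohen_kappa_score

def hits_at_1(predictions, gold):
    """Fraction of queries whose top prediction exactly matches the gold entity."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Toy outputs for two models on the same five queries (illustrative only).
gold     = ["Boeing 737", "engine failure", "runway", "pilot error", "fog"]
t5_small = ["Boeing 737", "stall", "taxiway", "pilot error", "fog"]
t5_large = ["Boeing 737", "stall", "runway", "pilot error", "fog"]

print(hits_at_1(t5_small, gold))  # per-model hits@1
print(hits_at_1(t5_large, gold))

# Agreement between the two models: mark each prediction correct (1) or
# incorrect (0) against gold, then compare the two binary label sequences.
small_correct = [int(p == g) for p, g in zip(t5_small, gold)]
large_correct = [int(p == g) for p, g in zip(t5_large, gold)]
print(cohen_kappa_score(small_correct, large_correct))
```

On this reading, a kappa of 0.76 between T5-small and T5-large means the two models tend to succeed and fail on the same queries, which supports the paper's claim that knowledge infusion narrows the gap between small and large models.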