Modelling Commonsense Properties using Pre-Trained Bi-Encoders (2210.02771v1)
Abstract: Grasping the commonsense properties of everyday concepts is an important prerequisite to language understanding. While contextualised language models are reportedly capable of predicting such commonsense properties with human-level accuracy, we argue that such results have been inflated because of the high similarity between training and test concepts. This means that models which capture concept similarity can perform well, even if they do not capture any knowledge of the commonsense properties themselves. In settings where there is no overlap between the properties considered during training and testing, we find that the empirical performance of standard language models drops dramatically. To address this, we study the possibility of fine-tuning language models to explicitly model concepts and their properties. In particular, we train separate concept and property encoders on two types of readily available data: extracted hyponym-hypernym pairs and generic sentences. Our experimental results show that the resulting encoders allow us to predict commonsense properties with much higher accuracy than is possible by directly fine-tuning language models. We also present experimental results for the related task of unsupervised hypernym discovery.
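
The abstract describes training separate concept and property encoders whose outputs are combined to score how plausible a property is for a concept. The sketch below illustrates one way such a bi-encoder could be set up; the BERT backbone (`bert-base-uncased`), [CLS] pooling, dot-product scoring, and binary cross-entropy objective are illustrative assumptions, not necessarily the paper's exact configuration.

```python
# Minimal bi-encoder sketch for concept-property plausibility scoring.
# Assumptions: a BERT-style backbone, [CLS] pooling, dot-product scoring,
# and a binary cross-entropy training objective on labelled pairs.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed backbone


class BiEncoder(nn.Module):
    """Separate encoders for concepts and properties; score = dot product."""

    def __init__(self, model_name: str = MODEL_NAME):
        super().__init__()
        self.concept_encoder = AutoModel.from_pretrained(model_name)
        self.property_encoder = AutoModel.from_pretrained(model_name)

    @staticmethod
    def _embed(encoder, inputs):
        # Use the [CLS] token embedding as the phrase representation.
        return encoder(**inputs).last_hidden_state[:, 0]

    def forward(self, concept_inputs, property_inputs):
        c = self._embed(self.concept_encoder, concept_inputs)
        p = self._embed(self.property_encoder, property_inputs)
        # Plausibility logit for each aligned (concept, property) pair.
        return (c * p).sum(dim=-1)


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = BiEncoder()

concepts = ["banana", "bicycle"]
properties = ["is yellow", "has wheels"]
c_in = tokenizer(concepts, return_tensors="pt", padding=True)
p_in = tokenizer(properties, return_tensors="pt", padding=True)

scores = model(c_in, p_in)
# Training signal could come from hyponym-hypernym pairs or from properties
# extracted from generic sentences, with 0/1 labels per pair.
labels = torch.tensor([1.0, 1.0])
loss = nn.BCEWithLogitsLoss()(scores, labels)
loss.backward()
print(scores.detach())
```

Because the two encoders are independent, concept and property embeddings can be precomputed and compared cheaply, which is what makes it possible to score unseen properties at test time rather than relying on overlap with training properties.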