
Modelling Commonsense Properties using Pre-Trained Bi-Encoders (2210.02771v1)

Published 6 Oct 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Grasping the commonsense properties of everyday concepts is an important prerequisite to language understanding. While contextualised language models are reportedly capable of predicting such commonsense properties with human-level accuracy, we argue that such results have been inflated because of the high similarity between training and test concepts. This means that models which capture concept similarity can perform well, even if they do not capture any knowledge of the commonsense properties themselves. In settings where there is no overlap between the properties that are considered during training and testing, we find that the empirical performance of standard language models drops dramatically. To address this, we study the possibility of fine-tuning language models to explicitly model concepts and their properties. In particular, we train separate concept and property encoders on two types of readily available data: extracted hyponym-hypernym pairs and generic sentences. Our experimental results show that the resulting encoders allow us to predict commonsense properties with much higher accuracy than is possible by directly fine-tuning language models. We also present experimental results for the related task of unsupervised hypernym discovery.
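
The abstract describes a bi-encoder setup: a concept encoder and a property encoder are trained separately, and a concept-property pair is scored by comparing their embeddings. The sketch below illustrates that idea with Hugging Face Transformers; the `bert-base-uncased` backbone, the `[CLS]` pooling, and the example concepts and properties are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal bi-encoder sketch for concept-property scoring (illustrative only).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed backbone, not necessarily the paper's

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
concept_encoder = AutoModel.from_pretrained(MODEL_NAME)   # encodes concepts, e.g. "banana"
property_encoder = AutoModel.from_pretrained(MODEL_NAME)  # encodes properties, e.g. "is yellow"

def encode(encoder, texts):
    """Return one [CLS] embedding per input string."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        output = encoder(**batch)
    return output.last_hidden_state[:, 0]  # [CLS] token embedding

concepts = ["banana", "snow", "crow"]
properties = ["is yellow", "is cold", "can fly"]

c_emb = torch.nn.functional.normalize(encode(concept_encoder, concepts), dim=-1)
p_emb = torch.nn.functional.normalize(encode(property_encoder, properties), dim=-1)

# Cosine similarity between every concept and every property. After the two
# encoders are fine-tuned on hyponym-hypernym pairs and generic sentences,
# high scores would indicate that the concept plausibly has the property.
scores = c_emb @ p_emb.T
print(scores)
```

Keeping the two encoders separate means property embeddings can be precomputed once and new concepts scored against all properties with a single matrix product, which is what makes evaluation on properties unseen during training straightforward.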

Citations (12)
