
Analyzing Commonsense Emergence in Few-shot Knowledge Models (2101.00297v3)

Published 1 Jan 2021 in cs.CL

Abstract: Recently, commonsense knowledge models - pretrained language models (LMs) fine-tuned on knowledge graph (KG) tuples - showed that considerable amounts of commonsense knowledge can be encoded in the parameters of language models. However, as parallel studies show that LMs are poor hypothesizers of declarative commonsense relationships on their own, it remains unclear whether this knowledge is learned during pretraining or from fine-tuning on KG examples. To investigate this question, we train commonsense knowledge models in few-shot settings to study the emergence of their commonsense representation abilities. Our results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to encoded knowledge learned during pretraining. Importantly, our analysis of absolute, angular, and distributional parameter changes during few-shot fine-tuning provides novel insights into how this interface is learned.
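The abstract's analysis of absolute and angular parameter changes can be illustrated with a small sketch. The snippet below is not the authors' code: it assumes PyTorch state dicts for a pretrained and a few-shot fine-tuned checkpoint (the file names and the `parameter_change_metrics` helper are hypothetical), and computes, per tensor, the L2 norm of the parameter difference and the angle between the flattened pretrained and fine-tuned weights.

```python
# Minimal sketch (not the paper's implementation) of per-tensor parameter
# change metrics between a pretrained LM and a few-shot fine-tuned one.
import torch


def parameter_change_metrics(pretrained_state, finetuned_state):
    """Absolute (L2) and angular change for each parameter tensor."""
    metrics = {}
    for name, w_pre in pretrained_state.items():
        w_ft = finetuned_state[name]
        w_pre_flat = w_pre.flatten().float()
        w_ft_flat = w_ft.flatten().float()

        # Absolute change: L2 norm of the parameter difference.
        abs_change = (w_ft_flat - w_pre_flat).norm().item()

        # Angular change: angle (radians) between the flattened tensors.
        cos = torch.nn.functional.cosine_similarity(
            w_pre_flat, w_ft_flat, dim=0
        ).clamp(-1.0, 1.0)
        angle = torch.acos(cos).item()

        metrics[name] = {"abs_change": abs_change, "angle": angle}
    return metrics


# Usage (hypothetical checkpoint paths):
# pre = torch.load("pretrained.pt")        # state_dict before KG fine-tuning
# ft = torch.load("fewshot_finetuned.pt")  # state_dict after few-shot tuning
# stats = parameter_change_metrics(pre, ft)
```

A distributional view, as mentioned in the abstract, could then be obtained by aggregating these per-tensor statistics (e.g., histograms of element-wise changes across layers), though the exact analysis in the paper may differ.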

Authors (5)
  1. Jeff Da (10 papers)
  2. Ronan Le Bras (56 papers)
  3. Ximing Lu (52 papers)
  4. Yejin Choi (287 papers)
  5. Antoine Bosselut (85 papers)
Citations (38)