
I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation (2212.09246v3)

Published 19 Dec 2022 in cs.CL

Abstract: Commonsense capabilities of pre-trained LLMs dramatically improve with scale, leading many to believe that scale is the only winning recipe. But is it? Here, we investigate an alternative that a priori seems impossible: can smaller LLMs (e.g., GPT-2) win over models that are orders of magnitude larger and better (e.g., GPT-3), if powered with novel commonsense distillation algorithms? The key intellectual challenge is to design a learning algorithm that achieves a competitive level of commonsense acquisition without relying on the benefits of scale. In particular, we study generative models of commonsense knowledge, focusing on the task of generating generics, statements of commonsense facts about everyday concepts, e.g., "birds can fly." We introduce I2D2, a novel commonsense distillation framework that loosely follows the Symbolic Knowledge Distillation of West et al. but breaks the dependence on the extreme-scale teacher model with two innovations: (1) a novel adaptation of NeuroLogic Decoding to enhance the generation quality of the weak, off-the-shelf LLMs, and (2) self-imitation learning to iteratively learn from the model's own enhanced commonsense acquisition capabilities. Empirical results suggest that scale is not the only way, as novel algorithms can be a promising alternative. Moreover, our study leads to a new corpus of generics, Gen-A-tomic, that is the largest and highest-quality available to date.

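The abstract describes an iterative loop: generate generics from a small off-the-shelf LM with constrained (NeuroLogic-style) decoding, filter the generations with a critic, and fine-tune the model on its own accepted outputs. The sketch below illustrates that loop in generic Python; the function names, the critic, and the acceptance threshold are illustrative assumptions, not the authors' actual I2D2 implementation.

```python
from typing import Callable, List

# "Model" stands in for a small LM such as GPT-2; the callables below are
# hypothetical hooks, not part of the paper's released code.
Model = object

def self_imitation_distillation(
    model: Model,
    prompts: List[str],
    generate: Callable[[Model, str], str],   # constrained (NeuroLogic-style) decoding
    critic: Callable[[str], float],          # scores plausibility of a generated generic
    fine_tune: Callable[[Model, List[str]], Model],
    n_iterations: int = 3,
    threshold: float = 0.5,
) -> Model:
    """One way to realize the loop sketched in the abstract:
    constrained generation -> critic filtering -> fine-tune on accepted outputs."""
    for _ in range(n_iterations):
        # (1) Constrained decoding improves generation quality of the weak LM.
        candidates = [generate(model, p) for p in prompts]
        # (2) Keep only generations the critic judges to be plausible generics.
        accepted = [c for c in candidates if critic(c) >= threshold]
        # (3) Self-imitation: the model learns from its own filtered outputs,
        #     then the next iteration generates from the improved model.
        model = fine_tune(model, accepted)
    return model
```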
Authors (10)
  1. Chandra Bhagavatula (46 papers)
  2. Jena D. Hwang (36 papers)
  3. Doug Downey (50 papers)
  4. Ronan Le Bras (56 papers)
  5. Ximing Lu (52 papers)
  6. Lianhui Qin (35 papers)
  7. Keisuke Sakaguchi (44 papers)
  8. Swabha Swayamdipta (49 papers)
  9. Peter West (76 papers)
  10. Yejin Choi (287 papers)
Citations (32)