
CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation (2204.07674v1)

Published 15 Apr 2022 in cs.CL

Abstract: Knowledge distillation (KD) is an efficient framework for compressing large-scale pre-trained LLMs. Recent years have seen a surge of research aiming to improve KD by leveraging Contrastive Learning, Intermediate Layer Distillation, Data Augmentation, and Adversarial Training. In this work, we propose a learning based data augmentation technique tailored for knowledge distillation, called CILDA. To the best of our knowledge, this is the first time that intermediate layer representations of the main task are used in improving the quality of augmented samples. More precisely, we introduce an augmentation technique for KD based on intermediate layer matching using contrastive loss to improve masked adversarial data augmentation. CILDA outperforms existing state-of-the-art KD approaches on the GLUE benchmark, as well as in an out-of-domain evaluation.
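The abstract describes matching student and teacher intermediate-layer representations with a contrastive loss. The paper does not give code here, so the following is only a minimal sketch of a generic InfoNCE-style intermediate-layer matching term in PyTorch, with hypothetical tensor names and pooling assumptions; it is not the authors' implementation of CILDA.

```python
import torch
import torch.nn.functional as F

def intermediate_contrastive_loss(student_hidden, teacher_hidden, temperature=0.1):
    """Illustrative InfoNCE-style loss: pull each student intermediate-layer
    representation toward the teacher representation of the same example,
    push it away from other examples in the batch.

    student_hidden, teacher_hidden: (batch, hidden) pooled intermediate states
    (pooling strategy and layer choice are assumptions for this sketch).
    """
    s = F.normalize(student_hidden, dim=-1)
    t = F.normalize(teacher_hidden, dim=-1)
    logits = s @ t.T / temperature                      # (batch, batch) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for pooled layer outputs.
student_h = torch.randn(8, 768)
teacher_h = torch.randn(8, 768)
print(intermediate_contrastive_loss(student_h, teacher_h).item())
```

In CILDA this kind of intermediate-layer signal is used to score and improve masked adversarial augmentations during distillation, rather than as a standalone training loss.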

Authors (6)
  1. Mehdi Rezagholizadeh (78 papers)
  2. Abbas Ghaddar (18 papers)
  3. Khalil Bibi (6 papers)
  4. Philippe Langlais (23 papers)
  5. Pascal Poupart (80 papers)
  6. Md Akmal Haidar (6 papers)
Citations (6)