MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation (2105.05912v1)

Published 12 May 2021 in cs.CL and cs.LG

Abstract: The advent of large pre-trained language models has given rise to rapid progress in the field of NLP. While the performance of these models on standard benchmarks has scaled with size, compression techniques such as knowledge distillation have been key in making them practical. We present MATE-KD, a novel text-based adversarial training algorithm which improves the performance of knowledge distillation. MATE-KD first trains a masked language model-based generator to perturb text by maximizing the divergence between teacher and student logits. Then, using knowledge distillation, a student is trained on both the original and the perturbed training samples. We evaluate our algorithm, using BERT-based models, on the GLUE benchmark and demonstrate that MATE-KD outperforms competitive adversarial learning and data augmentation baselines. On the GLUE test set, our 6-layer RoBERTa-based model outperforms BERT-Large.
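
The abstract describes a two-step training loop: a masked-LM generator is updated to maximize the teacher-student divergence on perturbed text, and the student is then distilled on both the original and perturbed samples. The sketch below is a minimal, hypothetical PyTorch illustration of that loop; the TinyEncoder and MaskedGenerator modules, vocabulary size, masking rate, and Gumbel-softmax sampling are illustrative assumptions standing in for the BERT/RoBERTa models and details of the paper, and the supervised cross-entropy term of the full distillation objective is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, CLASSES, MASK_ID = 1000, 64, 2, 0  # toy sizes, illustrative only


class TinyEncoder(nn.Module):
    """Stand-in classifier for the teacher or the student (not a real BERT)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, CLASSES)

    def forward(self, tokens):
        # Accept hard token ids or soft token distributions so gradients
        # can flow back to the generator through perturbed inputs.
        if tokens.dtype == torch.long:
            x = self.emb(tokens)
        else:
            x = tokens @ self.emb.weight           # (B, L, V) @ (V, D)
        return self.head(x.mean(dim=1))            # (B, CLASSES)


class MaskedGenerator(nn.Module):
    """Stand-in masked-LM generator predicting tokens for masked positions."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.lm_head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        return self.lm_head(self.emb(ids))         # (B, L, V) logits


def perturb(gen, ids, mask_prob=0.3, tau=1.0):
    """Mask a fraction of tokens and fill them with differentiable
    Gumbel-softmax samples drawn from the generator."""
    mask = torch.rand_like(ids, dtype=torch.float) < mask_prob
    logits = gen(ids.masked_fill(mask, MASK_ID))
    soft = F.gumbel_softmax(logits, tau=tau)       # soft samples over the vocab
    hard = F.one_hot(ids, VOCAB).float()
    return torch.where(mask.unsqueeze(-1), soft, hard)


teacher, student, gen = TinyEncoder(), TinyEncoder(), MaskedGenerator()
teacher.eval()
for p in teacher.parameters():                     # teacher stays frozen
    p.requires_grad_(False)

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
ids = torch.randint(1, VOCAB, (8, 16))             # one toy batch of token ids

# 1) Adversarial step: update the generator to MAXIMIZE the divergence
#    between teacher and student predictions on the perturbed text.
opt_g.zero_grad()
pert = perturb(gen, ids)
adv_div = F.kl_div(F.log_softmax(student(pert), dim=-1),
                   F.softmax(teacher(pert), dim=-1),
                   reduction="batchmean")
(-adv_div).backward()                              # gradient ascent on the divergence
opt_g.step()

# 2) Distillation step: train the student to match the teacher on both
#    the original and the perturbed samples.
opt_s.zero_grad()
with torch.no_grad():
    pert = perturb(gen, ids)
kd_loss = 0.0
for batch in (F.one_hot(ids, VOCAB).float(), pert):
    kd_loss = kd_loss + F.kl_div(F.log_softmax(student(batch), dim=-1),
                                 F.softmax(teacher(batch), dim=-1),
                                 reduction="batchmean")
kd_loss.backward()
opt_s.step()
```

In practice the two steps alternate over the training set, with the generator kept lightweight relative to the teacher so the adversarial perturbation remains cheap compared with the distillation pass.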

Authors (3)
  1. Ahmad Rashid (24 papers)
  2. Vasileios Lioutas (16 papers)
  3. Mehdi Rezagholizadeh (78 papers)
Citations (34)