Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation (2002.10345v1)

Published 24 Feb 2020 in cs.CL and cs.LG

Abstract: Fine-tuning pre-trained language models such as BERT has become an effective approach in NLP and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-training tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. Experiments on text classification and natural language inference tasks show that the proposed methods can significantly improve the adaptation of BERT without any external data or knowledge.

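The abstract only names the two mechanisms, so the sketch below illustrates one plausible reading of them in PyTorch: the "self-ensemble" is a parameter average of the student's recent checkpoints, and "self-distillation" adds a loss pulling the student's predictions toward that averaged teacher. The model name (`bert-base-uncased`), the MSE distillation term, the averaging window, the `lambda_distill` weight, and the learning rate are illustrative assumptions, not details taken from the paper.

```python
# Sketch of self-ensemble + self-distillation fine-tuning (assumptions noted above).
import copy
import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification

def average_state_dicts(state_dicts):
    """Parameter-wise average of several checkpoints (the self-ensemble teacher)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
        avg[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    return avg

def fine_tune(train_loader, num_labels=2, window=3, lambda_distill=1.0, device="cpu"):
    student = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=num_labels).to(device)
    teacher = copy.deepcopy(student).to(device).eval()
    optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)
    recent_checkpoints = []

    for batch in train_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        labels = batch.pop("labels")

        student_logits = student(**batch).logits
        with torch.no_grad():
            teacher_logits = teacher(**batch).logits

        # Task loss plus a self-distillation term toward the ensemble teacher's
        # predictions (MSE between logits is an assumption made for this sketch).
        task_loss = F.cross_entropy(student_logits, labels)
        distill_loss = F.mse_loss(student_logits, teacher_logits)
        loss = task_loss + lambda_distill * distill_loss

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Refresh the teacher as the average of the last `window` student checkpoints.
        recent_checkpoints.append(copy.deepcopy(student.state_dict()))
        recent_checkpoints = recent_checkpoints[-window:]
        teacher.load_state_dict(average_state_dicts(recent_checkpoints))

    return student
```

Because the teacher is built from the student's own trajectory, no external data, extra model, or modified pre-training objective is required, which matches the abstract's claim.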
Authors (4)
  1. Yige Xu (9 papers)
  2. Xipeng Qiu (257 papers)
  3. Ligao Zhou (1 paper)
  4. Xuanjing Huang (287 papers)
Citations (60)