
Dynamic Corrective Self-Distillation for Better Fine-Tuning of Pretrained Models (2312.07028v1)

Published 12 Dec 2023 in cs.CL and cs.AI

Abstract: We tackle the challenging issue of aggressive fine-tuning encountered during the process of transfer learning of pre-trained language models (PLMs) with limited labeled downstream data. This problem primarily results in a decline in performance on the subsequent task. Inspired by the adaptive boosting method in traditional machine learning, we present an effective dynamic corrective self-distillation (DCS) approach to improve the fine-tuning of the PLMs. Our technique involves performing a self-distillation mechanism where, at each iteration, the student model actively adapts and corrects itself by dynamically adjusting the weights assigned to individual data points. This iterative self-correcting process significantly enhances the overall fine-tuning capability of PLMs, leading to improved performance and robustness. We conducted comprehensive evaluations using the GLUE benchmark, demonstrating the efficacy of our method in enhancing the fine-tuning process for various PLMs across diverse downstream tasks.
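
The abstract describes self-distillation combined with boosting-style reweighting of individual training examples. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's exact formulation: the loss mixing coefficient `alpha`, temperature `T`, and the exponential weight-update rule are assumptions chosen to mirror standard knowledge distillation and AdaBoost-like reweighting.

```python
# Hypothetical sketch of a dynamic corrective self-distillation (DCS) step.
# All hyperparameters and the weight-update rule are illustrative assumptions.
import torch
import torch.nn.functional as F

def dcs_step(student, teacher, batch, sample_weights, optimizer, alpha=0.5, T=2.0):
    """One fine-tuning step: weighted cross-entropy on gold labels plus
    distillation from a frozen teacher (e.g., the previous-round student)."""
    inputs, labels, idx = batch  # idx indexes into the per-example weight vector
    student_logits = student(inputs)
    with torch.no_grad():
        teacher_logits = teacher(inputs)

    # Per-example cross-entropy, scaled by the current per-sample weights.
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    weighted_ce = (sample_weights[idx] * ce).mean()

    # Distillation loss against the teacher's temperature-softened predictions.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    loss = alpha * weighted_ce + (1.0 - alpha) * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Boosting-style correction: up-weight examples the student still misclassifies,
    # then renormalize so weights keep a constant mean (hypothetical update rule).
    with torch.no_grad():
        wrong = (student_logits.argmax(dim=-1) != labels).float()
        sample_weights[idx] *= torch.exp(wrong)
        sample_weights /= sample_weights.sum() / sample_weights.numel()

    return loss.item()
```

In this reading, the "iterative self-correcting process" corresponds to repeating such steps while the example weights shift toward points the model currently gets wrong; consult the paper for the actual update schedule and loss definition.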

Authors (3)
  1. Ibtihel Amara (6 papers)
  2. Vinija Jain (43 papers)
  3. Aman Chadha (110 papers)
