Efficient Fine-Tuning of Compressed Language Models with Learners (2208.02070v1)

Published 3 Aug 2022 in cs.CL and cs.LG

Abstract: Fine-tuning BERT-based models is resource-intensive in memory, computation, and time. While many prior works aim to improve inference efficiency via compression techniques, e.g., pruning, these works do not explicitly address the computational challenges of training to downstream tasks. We introduce Learner modules and priming, novel methods for fine-tuning that exploit the overparameterization of pre-trained LLMs to gain benefits in convergence speed and resource utilization. Learner modules navigate the double bind of 1) training efficiently by fine-tuning a subset of parameters, and 2) training effectively by ensuring quick convergence and high metric scores. Our results on DistilBERT demonstrate that learners perform on par with or surpass the baselines. Learners train 7x fewer parameters than state-of-the-art methods on GLUE. On CoLA, learners fine-tune 20% faster, and have significantly lower resource utilization.
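
The abstract does not describe the internal design of the Learner modules, so the following is only a minimal sketch of the general idea it states: fine-tuning a small subset of parameters while the pre-trained DistilBERT backbone stays frozen, here using the Hugging Face `transformers` API. The checkpoint name, the binary-classification task, and the choice of the classification head as the trainable subset are illustrative assumptions, not the paper's actual Learner/priming method.

```python
# Illustrative sketch (not the paper's Learner modules): freeze the pre-trained
# DistilBERT backbone and fine-tune only a small trainable subset of parameters.
import torch
from transformers import DistilBertForSequenceClassification

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # assumed checkpoint and task
)

# Freeze every parameter in the pre-trained transformer backbone.
for param in model.distilbert.parameters():
    param.requires_grad = False

# Only the lightweight head (pre_classifier + classifier) remains trainable
# in this sketch; the paper instead trains its proposed Learner modules.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} / {total:,} "
      f"({100 * trainable / total:.1f}%)")

# The optimizer only sees the trainable subset, which is the source of the
# savings in memory, compute, and fine-tuning time that the abstract targets.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```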

Authors (6)
  1. Danilo Vucetic (5 papers)
  2. Mohammadreza Tayaranian (6 papers)
  3. Maryam Ziaeefard (3 papers)
  4. James J. Clark (32 papers)
  5. Brett H. Meyer (11 papers)
  6. Warren J. Gross (75 papers)
Citations (2)
