
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning (2205.05638v2)

Published 11 May 2022 in cs.LG, cs.AI, and cs.CL

Abstract: Few-shot in-context learning (ICL) enables pre-trained LLMs to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called $(IA)^3$ that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available.


The paper "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" provides a comprehensive analysis of parameter-efficient fine-tuning (PEFT) methods, establishing their effectiveness compared to in-context learning (ICL) for few-shot learning tasks in LLMs.

Key Contributions

The paper introduces a new PEFT method, "Infused Adapter by Inhibiting and Amplifying Inner Activations," written (IA)³, which rescales the model's keys, values, and intermediate feed-forward activations by element-wise multiplication with learned vectors. Because the base model's weights are left untouched and only these small vectors are trained, the approach remains compatible with mixed-task batches and incurs minimal computational overhead. The authors show that (IA)³ outperforms existing PEFT methods such as BitFit, Adapters, Compacter, prompt tuning, and LoRA across a range of datasets.
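The core mechanism can be sketched in a few lines. This is an illustrative toy example, not the authors' implementation: in the real method, separate learned vectors rescale the keys, values, and inner feed-forward activations of each transformer block, and the dimensions below are made up.

```python
# Toy sketch of the (IA)^3 rescaling idea: a learned vector multiplies
# activations element-wise, inhibiting (scale < 1) or amplifying
# (scale > 1) individual hidden dimensions. Only the vector is trained.

def ia3_scale(activations, scale):
    """Element-wise rescale each activation row by a learned vector."""
    return [[a * s for a, s in zip(row, scale)] for row in activations]

# Hypothetical activations: two tokens, hidden size 4.
keys = [[0.5, -1.0, 2.0, 0.0],
        [1.0, 1.0, 1.0, 1.0]]

# The vector is initialized to ones, so the model starts out unchanged...
assert ia3_scale(keys, [1.0, 1.0, 1.0, 1.0]) == keys

# ...and fine-tuning then adjusts it per dimension.
l_k = [1.2, 0.0, 0.8, 1.0]  # hypothetical learned values
scaled = ia3_scale(keys, l_k)
```

Because the rescaling vectors are tiny relative to the model and can be applied per example, different tasks' vectors can coexist in one batch, which is what makes mixed-task batches cheap.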

Numerical Results and Analysis

The authors present compelling numerical evidence that few-shot PEFT with (IA)³ yields higher accuracy than ICL, even against strong ICL baselines such as GPT-3. The full T-Few recipe outperforms few-shot ICL with GPT-3's largest, 175B-parameter variant on held-out tasks while using a far smaller model and dramatically less inference compute. On the RAFT benchmark, T-Few attains super-human performance for the first time and outperforms the prior state of the art by 6% absolute.

Methodological Insights

The paper emphasizes computational efficiency by estimating the FLOPs for each method, showing that IA3 offers more than a 1,000-fold reduction in inference FLOPs compared to GPT-3 ICL. The storage costs are also modest, requiring only a few megabytes for the additional parameters.
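The source of that gap is easy to see with back-of-envelope arithmetic. The sketch below uses the common rough estimate of about 2 × (parameters) FLOPs per processed token; the model sizes match the comparison in the summary (GPT-3 175B for ICL, a ~3B T0 variant for T-Few), but the example counts and lengths are hypothetical and the paper's exact accounting differs.

```python
def forward_flops(params, tokens):
    # Rough rule of thumb: ~2 * params FLOPs per token in a forward pass.
    return 2 * params * tokens

gpt3_params = 175e9  # GPT-3 175B, used for in-context learning
t0_params = 3e9      # ~3B-parameter fine-tuned model (T-Few-style)

# ICL re-processes every in-context example on every single prediction;
# the fine-tuned model only processes the query itself.
k, example_len, input_len = 32, 98, 98   # hypothetical lengths
icl_tokens = k * example_len + input_len
peft_tokens = input_len

ratio = forward_flops(gpt3_params, icl_tokens) / forward_flops(t0_params, peft_tokens)
# ratio comes out well above 1,000x under these assumptions
```

The ratio compounds two savings: a much smaller model, and a much shorter input per prediction, which is why the advantage survives even generous assumptions about prompt length.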

Furthermore, the authors introduce two auxiliary objectives for fine-tuning on classification tasks: an unlikelihood loss that pushes down the probability of incorrect answer choices, and a length-normalized cross-entropy loss that prevents a bias toward shorter choices. Both improve PEFT performance. Because the model is actually trained on the examples rather than conditioning on them, this setup also sidesteps well-known ICL pathologies such as sensitivity to example order and prompt formatting.
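The length-normalization idea can be illustrated with rank classification over answer choices: each choice is scored by its average per-token log-probability rather than its total, so multi-token choices are not penalized for length. The log-probabilities below are hypothetical, and the unlikelihood term is shown in a simplified per-token-probability form rather than the paper's exact formulation.

```python
import math

def length_normalized_score(token_logprobs):
    # Average log-probability per token: a 3-token choice is no longer
    # penalized just for summing three negative numbers.
    return sum(token_logprobs) / len(token_logprobs)

def rank_classify(choice_logprobs):
    # Pick the answer choice with the best length-normalized score.
    return max(choice_logprobs, key=lambda c: length_normalized_score(choice_logprobs[c]))

def unlikelihood_loss(wrong_choice_token_probs):
    # Simplified unlikelihood term: penalize probability mass assigned
    # to tokens of an incorrect choice via -log(1 - p).
    return -sum(math.log(1.0 - p) for p in wrong_choice_token_probs)

# Hypothetical per-token log-probs for two answer choices:
choices = {
    "no": [-0.5],                       # 1 token, total -0.5
    "not really": [-0.3, -0.3, -0.3],   # 3 tokens, total -0.9
}
# A raw sum would pick "no" (-0.5 > -0.9), but per-token the model
# actually prefers "not really" (-0.3 > -0.5):
prediction = rank_classify(choices)
```

Here normalization flips the decision, which is exactly the failure mode it targets: without it, single-token choices win by default.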

Implications for Future Research

This paper not only establishes the advantages of PEFT over ICL in the few-shot setting but also opens avenues for further work on parameter-efficient adaptation of LLMs. The results suggest that (IA)³ may transfer across diverse tasks and model architectures. Future work may extend these techniques to generative tasks and explore integrating PEFT with smaller models for broader accessibility.

In conclusion, the paper makes a significant contribution to the field by offering an efficient and effective alternative to traditional in-context learning methods for few-shot learning, providing insights into the practical considerations of adopting parameter-efficient architectures in LLMs.

Authors (7)
  1. Haokun Liu
  2. Derek Tam
  3. Mohammed Muqeeth
  4. Jay Mohta
  5. Tenghao Huang
  6. Mohit Bansal
  7. Colin Raffel