A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques (2406.04879v1)

Published 7 Jun 2024 in cs.CL

Abstract: LLMs are first pre-trained on trillions of tokens and then instruction-tuned or aligned to specific preferences. While pre-training remains out of reach for most researchers due to the compute required, fine-tuning has become affordable thanks to parameter-efficient methods such as LoRA and QLoRA. Alignment is known to be sensitive to the many factors involved, including the quantity and quality of data, the alignment method, and the adapter rank. However, there has not yet been an extensive study of their effect on downstream performance. To address this gap, we conduct an in-depth investigation of the impact of popular choices for three crucial axes: (i) the alignment dataset (HH-RLHF and BeaverTails), (ii) the alignment technique (SFT and DPO), and (iii) the model (LLaMA-1, Vicuna-v1.3, Mistral-7b, and Mistral-7b-Instruct). Our extensive setup spanning over 300 experiments reveals consistent trends and unexpected findings. We observe how more informative data helps with preference alignment, cases where supervised fine-tuning outperforms preference optimization, and how aligning to a distinct preference boosts performance on downstream tasks. Through our in-depth analyses, we put forward key guidelines to help researchers perform more effective parameter-efficient LLM alignment.
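The abstract's central premise is that fine-tuning becomes affordable through low-rank adapters. As a minimal sketch of the idea behind LoRA (not the paper's code; the dimensions, rank, and scaling factor below are illustrative assumptions), the frozen weight matrix is augmented with a trainable low-rank product, so only a small fraction of parameters is updated during alignment:

```python
import numpy as np

# Illustrative LoRA update: W' = W + (alpha / r) * B @ A
# d (hidden size), r (adapter rank), and alpha are assumed values.
d, r, alpha = 4096, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (zero-init)

W_adapted = W + (alpha / r) * (B @ A)    # effective weight after adaptation

full_params = W.size                     # parameters in the frozen matrix
lora_params = A.size + B.size            # parameters actually trained
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

With zero-initialized B the adapted weight starts identical to the pre-trained one, and the trainable fraction here is roughly 0.4% of the full matrix, which is why the adapter rank (axis iii's companion hyperparameter in the study) matters for downstream performance.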

Authors (7)
  1. Megh Thakkar
  2. Quentin Fournier
  3. Matthew D Riemer
  4. Pin-Yu Chen
  5. Amal Zouaq
  6. Payel Das
  7. Sarath Chandar
Citations (3)