Model Merging and Safety Alignment: One Bad Model Spoils the Bunch (2406.14563v1)

Published 20 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Merging LLMs is a cost-effective technique for combining multiple expert LLMs into a single versatile model, retaining the expertise of the original ones. However, current approaches often overlook the importance of safety alignment during merging, leading to highly misaligned models. This work investigates the effects of model merging on alignment. We evaluate several popular model merging techniques, demonstrating that existing methods not only transfer domain expertise but also propagate misalignment. We propose a simple two-step approach to address this problem: (i) generating synthetic safety and domain-specific data, and (ii) incorporating these generated data into the optimization process of existing data-aware model merging techniques. This allows us to treat alignment as a skill that can be maximized in the resulting merged LLM. Our experiments illustrate the effectiveness of integrating alignment-related data during merging, resulting in models that excel in both domain expertise and alignment.
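The core idea of the two-step approach can be illustrated with a minimal sketch of data-aware merging: interpolate the parameters of two expert models and choose the merging coefficient by optimizing a combined objective that scores both domain performance and safety alignment. All function names, the grid search, and the toy scoring callables below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: data-aware model merging with alignment treated as a skill.
# Parameters are represented as plain dicts of floats for clarity; a real
# implementation would operate on model state dicts (e.g., PyTorch tensors).

def merge(theta_a, theta_b, alpha):
    """Linearly interpolate two parameter dicts: alpha*theta_a + (1-alpha)*theta_b."""
    return {k: alpha * theta_a[k] + (1 - alpha) * theta_b[k] for k in theta_a}

def combined_score(theta, domain_eval, safety_eval, lam=0.5):
    """Mix a domain-expertise score and a safety-alignment score.

    domain_eval and safety_eval stand in for evaluations on the synthetic
    domain-specific and safety data described in the abstract (assumption).
    """
    return (1 - lam) * domain_eval(theta) + lam * safety_eval(theta)

def search_alpha(theta_a, theta_b, domain_eval, safety_eval, steps=11):
    """Grid-search the merging coefficient that maximizes the combined score."""
    best_alpha, best = None, float("-inf")
    for i in range(steps):
        alpha = i / (steps - 1)
        score = combined_score(merge(theta_a, theta_b, alpha),
                               domain_eval, safety_eval)
        if score > best:
            best_alpha, best = alpha, score
    return best_alpha
```

Because the safety score enters the merging objective directly, a coefficient that would maximize domain expertise alone can be rejected in favor of one that balances both, which is how misalignment is prevented from dominating the merged model in this sketch.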

Authors (7)
  1. Hasan Abed Al Kader Hammoud (20 papers)
  2. Umberto Michieli (40 papers)
  3. Fabio Pizzati (22 papers)
  4. Philip Torr (172 papers)
  5. Adel Bibi (53 papers)
  6. Bernard Ghanem (256 papers)
  7. Mete Ozay (65 papers)
Citations (6)
