
Cool-Fusion: Fuse Large Language Models without Training (2407.19807v1)

Published 29 Jul 2024 in cs.CL

Abstract: We focus on the problem of fusing two or more heterogeneous LLMs to leverage their complementary strengths. One of the challenges in model fusion is the high computational load, i.e., the need to fine-tune models or to align vocabularies via combinatorial optimization. To this end, we propose Cool-Fusion, a simple yet effective approach that fuses the knowledge of heterogeneous source LLMs to leverage their complementary strengths. Like ensemble approaches, Cool-Fusion is the first method that does not require any type of training; but unlike ensemble methods, it is applicable to any set of source LLMs with different vocabularies. The basic idea is to have each source LLM individually generate tokens until they can be decoded into a text segment that ends at a word boundary common to all source LLMs. The source LLMs then jointly rerank the generated text segments and select the best one, which becomes the fused text generation for that step. Extensive experiments are conducted across a variety of benchmark datasets. On GSM8K, Cool-Fusion improves the accuracy of three strong source LLMs by a significant 8%-17.8%.
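
Below is a minimal sketch of the generate-then-rerank loop described in the abstract, assuming a generic SourceLLM interface. The interface names, the whitespace/punctuation word-boundary test, and the average-log-probability reranking score are illustrative assumptions; the paper's exact segment-boundary definition and scoring may differ.

```python
# Sketch of one Cool-Fusion decoding step, based only on the abstract.
# SourceLLM, ends_at_word_boundary, and avg_logprob are assumed interfaces,
# not the authors' implementation.
from typing import List, Protocol


class SourceLLM(Protocol):
    def generate_tokens(self, context: str, max_new_tokens: int) -> List[str]:
        """Generate tokens (as decoded strings) continuing `context`."""
        ...

    def avg_logprob(self, context: str, continuation: str) -> float:
        """Average per-token log-probability this model assigns to `continuation`."""
        ...


def ends_at_word_boundary(text: str) -> bool:
    # Assumption: a segment "ends at a word boundary" if it ends with
    # whitespace or punctuation that every tokenizer treats as a word break.
    return len(text) > 0 and (text[-1].isspace() or text[-1] in ".,;:!?")


def propose_segment(model: SourceLLM, context: str, max_new_tokens: int = 32) -> str:
    """One source LLM generates tokens until the decoded text ends at a word boundary."""
    segment = ""
    for token in model.generate_tokens(context, max_new_tokens):
        segment += token
        if ends_at_word_boundary(segment):
            break
    return segment


def cool_fusion_step(models: List[SourceLLM], context: str) -> str:
    """Every model proposes a segment; all models jointly rerank the proposals
    (here: mean log-probability across models) and the best segment is kept."""
    candidates = [propose_segment(m, context) for m in models]

    def joint_score(segment: str) -> float:
        return sum(m.avg_logprob(context, segment) for m in models) / len(models)

    return max(candidates, key=joint_score)


def fuse_generate(models: List[SourceLLM], prompt: str, max_steps: int = 50) -> str:
    """Repeat fused steps for a fixed budget (real stopping criteria omitted)."""
    context = prompt
    for _ in range(max_steps):
        context += cool_fusion_step(models, context)
    return context
```

In this reading, no model is ever fine-tuned and no vocabulary alignment is needed: each model only decodes with its own tokenizer, and fusion happens at the level of plain-text segments that every model can score.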

Authors (6)
  1. Cong Liu (169 papers)
  2. Xiaojun Quan (52 papers)
  3. Yan Pan (48 papers)
  4. Liang Lin (318 papers)
  5. Weigang Wu (11 papers)
  6. Xu Chen (413 papers)
Citations (2)