Merge, Ensemble, and Cooperate! A Survey on Collaborative Strategies in the Era of Large Language Models (2407.06089v1)

Published 8 Jul 2024 in cs.CL

Abstract: The remarkable success of LLMs has ushered NLP research into a new era. Despite their diverse capabilities, LLMs trained on different corpora exhibit varying strengths and weaknesses, leading to challenges in maximizing their overall efficiency and versatility. To address these challenges, recent studies have explored collaborative strategies for LLMs. This paper provides a comprehensive overview of this emerging research area, highlighting the motivation behind such collaborations. Specifically, we categorize collaborative strategies into three primary approaches: Merging, Ensemble, and Cooperation. Merging involves integrating multiple LLMs in the parameter space. Ensemble combines the outputs of various LLMs. Cooperation leverages different LLMs to give full play to their diverse capabilities for specific tasks. We provide in-depth introductions to these methods from different perspectives and discuss their potential applications. Additionally, we outline future research directions, hoping this work will catalyze further studies on LLM collaborations and pave the way for advanced NLP applications.
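To make the taxonomy concrete, the sketch below illustrates the general idea behind two of the three categories, not any specific method from the survey: parameter-space merging as simple weight interpolation between architecture-compatible models, and output-level ensembling as averaging of next-token probability distributions. The toy `nn.Linear` models, the `alpha` coefficient, and the helper names are illustrative assumptions, standing in for full LLMs with shared architectures and vocabularies.

```python
# Minimal sketch (assumptions noted above): two common LLM collaboration patterns.
# Merging: interpolate homologous parameters of two models with identical architectures.
# Ensemble: average the output probability distributions of several models.

import torch
import torch.nn as nn


def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Linear interpolation of two compatible state dicts (simple weight averaging)."""
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}


def ensemble_probs(logits_list: list[torch.Tensor]) -> torch.Tensor:
    """Average next-token distributions from several models over a shared vocabulary."""
    probs = [torch.softmax(logits, dim=-1) for logits in logits_list]
    return torch.stack(probs).mean(dim=0)


if __name__ == "__main__":
    # Two toy "models" with the same architecture stand in for LLMs here.
    model_a, model_b = nn.Linear(4, 8), nn.Linear(4, 8)
    merged = nn.Linear(4, 8)
    merged.load_state_dict(merge_state_dicts(model_a.state_dict(), model_b.state_dict()))

    # Output-level ensemble over a toy vocabulary of size 8.
    x = torch.randn(1, 4)
    combined = ensemble_probs([model_a(x), model_b(x), merged(x)])
    print(combined.shape)  # torch.Size([1, 8])
```

Cooperation, the third category, is not shown: it typically involves orchestrating different LLMs at the system level (e.g., one model delegating subtasks to another) rather than combining parameters or outputs directly.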

Authors (6)
  1. Jinliang Lu (8 papers)
  2. Ziliang Pang (1 paper)
  3. Min Xiao (103 papers)
  4. Yaochen Zhu (23 papers)
  5. Rui Xia (53 papers)
  6. Jiajun Zhang (176 papers)
Citations (3)