
Integration of Large Language Models and Federated Learning (2307.08925v3)

Published 18 Jul 2023 in cs.LG, cs.AI, and cs.CL

Abstract: As the parameter size of LLMs continues to expand, there is an urgent need to address the scarcity of high-quality data. In response, existing research has attempted to make a breakthrough by incorporating Federated Learning (FL) into LLMs. Conversely, considering the outstanding performance of LLMs in task generalization, researchers have also tried applying LLMs within FL to tackle challenges in relevant domains. The complementarity between LLMs and FL has already ignited widespread research interest. In this paper, we aim to deeply explore the integration of LLMs and FL. We propose a research framework, dividing the fusion of LLMs and FL into three parts: the combination of LLM sub-technologies with FL, the integration of FL sub-technologies with LLMs, and the overall merger of LLMs and FL. We first provide a comprehensive review of the current state of research in the domain of LLMs combined with FL, including their typical applications, integration advantages, challenges faced, and future directions for resolution. Subsequently, we discuss the practical applications of the combination of LLMs and FL in critical scenarios such as healthcare, finance, and education, and provide new perspectives and insights into future research directions for LLMs and FL.
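The core Federated Learning building block the survey refers to is collaborative training without sharing raw data: clients train locally and a server aggregates their parameters. A minimal, illustrative sketch of the standard FedAvg aggregation step (not the paper's specific method) might look like:

```python
# Minimal FedAvg sketch (illustrative only): a central server averages
# model parameters from clients, weighted by each client's local data size.
# In LLM settings the "parameters" would be model or adapter weights.

def fedavg(client_params, client_sizes):
    """Weighted average of per-client parameter lists."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]

# Two toy clients with 2-parameter "models"; client 2 holds twice the data.
clients = [[1.0, 0.0], [4.0, 3.0]]
sizes = [100, 200]
global_params = fedavg(clients, sizes)
print(global_params)  # [3.0, 2.0]
```

In practice, LLM-scale FL typically aggregates only lightweight adapter weights (e.g., LoRA) rather than full models, to keep communication costs tractable.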

Authors (7)
  1. Chaochao Chen (87 papers)
  2. Xiaohua Feng (15 papers)
  3. Jun Zhou (370 papers)
  4. Jianwei Yin (71 papers)
  5. Xiaolin Zheng (52 papers)
  6. Yuyuan Li (24 papers)
  7. Lingjuan Lyu (131 papers)
Citations (38)