
FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models (2406.04845v1)

Published 7 Jun 2024 in cs.CL, cs.AI, cs.DC, cs.LG, and cs.MA

Abstract: Federated learning enables multiple parties to collaboratively train LLMs without directly sharing their data (FedLLM). Following this training paradigm, the community has invested substantial effort in diverse aspects, including frameworks, performance, and privacy. However, there are currently no realistic datasets or benchmarks for FedLLM: previous works all rely on artificially constructed datasets that fail to capture properties of real-world scenarios. To address this, we propose FedLLM-Bench, which involves 8 training methods, 4 training datasets, and 6 evaluation metrics, offering a comprehensive testbed for the FedLLM community. FedLLM-Bench encompasses three datasets for federated instruction tuning (e.g., a user-annotated multilingual dataset) and one dataset for federated preference alignment (e.g., a user-annotated preference dataset), with client counts ranging from 38 to 747. Our datasets exhibit several representative forms of diversity: language, quality, quantity, instruction, length, embedding, and preference, capturing properties of real-world scenarios. Based on FedLLM-Bench, we conduct experiments on all datasets to benchmark existing FL methods and provide empirical insights (e.g., on multilingual collaboration). We believe FedLLM-Bench can benefit the FedLLM community by reducing required effort, providing a practical testbed, and promoting fair comparisons. Code and datasets are available at https://github.com/rui-ye/FedLLM-Bench.
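
To make the training paradigm concrete, the sketch below shows one FedAvg-style round, the basic aggregation scheme that federated instruction-tuning pipelines commonly build on: each client fine-tunes a private copy of the global model on local data, and the server averages the returned weights by local sample count. This is an illustrative assumption-based example using PyTorch and a toy linear model, not the paper's actual methods, datasets, or hyperparameters.

```python
# Minimal FedAvg-style round (illustrative sketch; not the FedLLM-Bench implementation).
# The tiny model, synthetic client data, and hyperparameters are placeholders.
import copy
import torch
import torch.nn as nn


def local_update(global_model, data, lr=1e-3, epochs=1):
    """Client-side step: fine-tune a copy of the global model on private local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    # Return updated weights and the local sample count used for weighting.
    return model.state_dict(), sum(len(x) for x, _ in data)


def fedavg(global_model, client_updates):
    """Server-side step: average client weights, weighted by local sample count."""
    total = sum(n for _, n in client_updates)
    avg = {k: torch.zeros_like(v) for k, v in client_updates[0][0].items()}
    for state, n in client_updates:
        for k, v in state.items():
            avg[k] += v * (n / total)
    global_model.load_state_dict(avg)
    return global_model


if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = nn.Linear(4, 1)  # stand-in for an LLM's trainable weights
    # Two hypothetical clients, each holding a small private dataset.
    clients = [
        [(torch.randn(8, 4), torch.randn(8, 1))],
        [(torch.randn(16, 4), torch.randn(16, 1))],
    ]
    updates = [local_update(global_model, data) for data in clients]
    global_model = fedavg(global_model, updates)
    print("aggregated weight norm:", global_model.weight.norm().item())
```

In practice, raw data never leaves a client; only model updates are exchanged, which is what allows the heterogeneous, realistically distributed client datasets (38 to 747 clients) in FedLLM-Bench to be used without centralizing them.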

Authors (8)
  1. Rui Ye (42 papers)
  2. Rui Ge (18 papers)
  3. Xinyu Zhu (28 papers)
  4. Jingyi Chai (10 papers)
  5. Yaxin Du (10 papers)
  6. Yang Liu (2253 papers)
  7. Yanfeng Wang (211 papers)
  8. Siheng Chen (152 papers)
Citations (7)
GitHub: https://github.com/rui-ye/FedLLM-Bench