
Flower: A Friendly Federated Learning Research Framework (2007.14390v5)

Published 28 Jul 2020 in cs.LG, cs.CV, and stat.ML

Abstract: Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model, while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL is difficult to implement realistically, both in terms of scale and systems heterogeneity. Although there are a number of research frameworks available to simulate FL algorithms, they do not support the study of scalable FL workloads on heterogeneous edge devices. In this paper, we present Flower -- a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments and consider richly heterogeneous FL device scenarios. Our experiments show Flower can perform FL experiments up to 15M in client size using only a pair of high-end GPUs. Researchers can then seamlessly migrate experiments to real devices to examine other parts of the design space. We believe Flower provides the community with a critical new tool for FL study and development.

Citations (649)

Summary

  • The paper presents Flower as a scalable framework enabling federated learning experiments with up to 15 million clients on as little as a pair of high-end GPUs.
  • It employs a framework-agnostic design that supports various ML libraries and languages, enhancing flexibility across diverse FL pipelines.
  • Flower bridges simulation and real-world deployments by efficiently profiling FL operations on devices ranging from single machines to edge platforms.

Flower: A Federated Learning Framework

The paper "Flower: A Friendly Federated Learning Framework" introduces Flower, a comprehensive federated learning (FL) framework designed to facilitate large-scale FL experiments while accommodating the heterogeneity of edge devices. This document discusses the key features, implementation, and performance evaluation of Flower, emphasizing its utility for FL research and development.

Federated Learning has become increasingly significant because it enables edge devices to collaboratively train machine learning models without centralizing their data, thus preserving privacy. However, implementing FL at scale, particularly across devices with diverse capabilities, remains a complex challenge. Flower addresses these challenges by providing a scalable and adaptable platform, distinguishing it from existing frameworks that are often limited to rigid or small-scale simulations.

Key Contributions

  1. Scalability and Heterogeneity: Flower supports FL experiments with up to 15 million clients using limited hardware resources, such as just a pair of high-end GPUs. This capability is critical for modeling realistic FL scenarios that consider vast numbers of devices with diverse capabilities.
  2. Framework-Agnostic Approach: Flower is designed to be agnostic to both the ML framework and the programming language, so researchers can implement FL pipelines without being constrained by the underlying ML library. This enhances flexibility and ease of experimentation (a minimal client sketch follows this list).
  3. Real-World Application: The framework facilitates the migration of simulations to real-world devices. This functionality enables experiments to be conducted under varied system conditions, including limited computational resources and fluctuating network capabilities.
  4. Open Source and Extensible: Flower is open-sourced under the Apache 2.0 License and has received contributions from both academia and industry, fostering a growing community that integrates new algorithms and functionalities.
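
To make the framework-agnostic design concrete, here is a minimal sketch of a Flower client built on the NumPyClient interface from the flwr package. The toy NumPy "model" and its one-line "training" update are hypothetical stand-ins for real training code in any ML library (PyTorch, TensorFlow, JAX, ...); only the NumPyClient methods themselves come from Flower's documented API, and exact signatures can differ between Flower releases.

```python
# Minimal Flower client sketch (assumes: pip install flwr).
# The single weight vector and the toy update below are hypothetical
# stand-ins for a real model and training loop in any ML library.
import numpy as np
import flwr as fl

weights = [np.zeros(10)]  # toy "model": one weight vector

class SketchClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        # Return the current model weights as a list of NumPy ndarrays.
        return weights

    def fit(self, parameters, config):
        # Receive the global weights, run "local training", return the update.
        global weights
        weights = [w - 0.1 for w in parameters]  # toy local step
        return weights, 100, {}  # (updated weights, num local examples, metrics)

    def evaluate(self, parameters, config):
        # Score the global weights on "local data" (here: just their L2 norm).
        loss = float(sum(np.square(w).sum() for w in parameters))
        return loss, 100, {"loss": loss}

# On a physical device the same client would connect to a Flower server, e.g.:
# fl.client.start_numpy_client(server_address="<server-ip>:8080", client=SketchClient())
```

Because the client exchanges nothing but lists of NumPy arrays, the server never needs to know which ML library produced them; this is the substance of the language- and framework-agnostic claim.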

Experimental Evaluation

The paper provides insights into the experimental evaluation of Flower, highlighting its effectiveness in various scenarios:

  • Single-Machine Simulations: Flower’s Virtual Client Engine manages resources efficiently, allowing large-scale simulations on single- or multi-node setups, and it outperforms other frameworks by making fuller use of available hardware (see the simulation sketch after this list).
  • Heterogeneity in Devices: The paper validates deployment of Flower on a range of heterogeneous edge devices, such as Android smartphones and Nvidia Jetson boards. The framework enables fine-grained profiling of FL operations, yielding metrics useful for optimizing client selection strategies.
  • Scalability with Mega-Scale Datasets: Experiments illustrate Flower’s capacity to manage large datasets like ImageNet, demonstrating its potential for training complex models in federated settings.
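
The single-machine bullet above refers to Flower's Virtual Client Engine, which creates clients lazily so that only those sampled in a given round occupy memory. Below is a hedged sketch of launching such a simulation, reusing the SketchClient from the earlier example; the client count, sampling fraction, and round count are illustrative values, the simulation extra (pip install "flwr[simulation]") is required, and exact signatures vary across Flower releases.

```python
import flwr as fl

def client_fn(cid: str):
    # Called on demand by the Virtual Client Engine: one lightweight client
    # per sampled id, so large client populations never sit in memory at once.
    return SketchClient()

fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=1000,  # illustrative; scale toward paper-scale populations
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=fl.server.strategy.FedAvg(fraction_fit=0.1),  # sample 10% per round
)
```

Moving from this simulation to the real-device deployment shown in the earlier sketch requires no change to the client logic, which is the seamless simulation-to-device migration the paper highlights.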

Implications and Future Directions

The introduction of Flower significantly impacts both theoretical and practical aspects of FL research. It provides a robust platform for testing new algorithms and strategies in realistic settings, bridging the gap between theoretical research and practical implementation. By enabling experiments at unprecedented scales, Flower contributes to understanding the dynamics of FL in real-world applications.

Future developments in FL will likely leverage frameworks like Flower to explore advanced topics, such as secure aggregation protocols, adaptive client selection, and optimization under constraints of device heterogeneity. Additionally, Flower's extensible nature promises continuous integration of state-of-the-art algorithms, further advancing the capabilities of federated systems.

In conclusion, Flower represents a pivotal step toward realizing scalable federated learning. By addressing key challenges of scalability and device heterogeneity, it offers researchers and developers a powerful toolset for advancing FL technologies and applications.
