HierFAVG: Hierarchical Federated Averaging
- HierFAVG is a hierarchical federated learning algorithm that leverages a three-tier client–edge–cloud structure to balance computation and communication trade-offs.
- The algorithm employs local SGD updates along with intermediate edge and global cloud aggregations, ensuring theoretical convergence while mitigating model drift.
- Empirical results on MNIST and CIFAR-10 show up to 3–4× faster training and a 30% reduction in device energy consumption compared to standard FedAvg.
HierFAVG (Hierarchical Federated Averaging) is a multi-tier federated learning algorithm designed to efficiently train machine learning models across decentralized datasets distributed over a client–edge–cloud architecture. Unlike classical FedAvg, which uses a single parameter server (typically in the cloud or at the edge), HierFAVG introduces an intermediate aggregation layer at the edge servers between clients and a central cloud server. This hierarchy enables partial aggregation at multiple levels, providing favorable computation–communication trade-offs and significant improvements in training speed and device energy efficiency while maintaining strong theoretical convergence guarantees (Liu et al., 2019, Yang et al., 2022).
1. Hierarchical Architecture and Optimization Problem
HierFAVG operates in a three-tier network comprising clients (also called workers), edge servers, and a cloud server. Each client $i$ in edge cluster $\ell$ holds a private dataset $\mathcal{D}_i$; edge server $\ell$ aggregates the data of its assigned clients, $\mathcal{D}^{\ell} = \bigcup_{i \in \mathcal{C}^{\ell}} \mathcal{D}_i$; the cloud aggregates across all edge servers. The global learning objective is to minimize the loss
$$F(\mathbf{w}) = \sum_{i} \frac{|\mathcal{D}_i|}{|\mathcal{D}|} F_i(\mathbf{w}), \qquad F_i(\mathbf{w}) = \frac{1}{|\mathcal{D}_i|} \sum_{\xi \in \mathcal{D}_i} f(\mathbf{w}, \xi),$$
where $\mathcal{D} = \bigcup_i \mathcal{D}_i$ denotes the union of all client datasets. Data may be statistically non-IID at both the worker and edge levels, quantified via a client–edge gradient divergence $\delta$ and an edge–cloud divergence $\Delta$.
The hierarchical structure allows adaptation to realistic network and data scenarios, where direct client–cloud communication is costly or infeasible, and edge resources can be leveraged for intermediate aggregation (Liu et al., 2019, Yang et al., 2022).
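To make the setup concrete, the following minimal Python sketch (synthetic logistic-regression data and illustrative cluster sizes, not taken from the cited papers) builds a two-edge, four-client hierarchy and evaluates the global objective as the data-size-weighted average of per-client losses:

```python
import numpy as np

# Toy client–edge–cloud hierarchy (illustrative sizes, synthetic data).
rng = np.random.default_rng(0)
dim = 5

def make_client(n_samples):
    """A client's private dataset D_i: features X and binary labels y."""
    X = rng.normal(size=(n_samples, dim))
    y = rng.integers(0, 2, size=n_samples)
    return {"X": X, "y": y, "n": n_samples}

# clusters[l] is the list of clients served by edge server l.
clusters = [
    [make_client(40), make_client(60)],   # edge 0
    [make_client(30), make_client(70)],   # edge 1
]

def local_loss(w, c):
    """Per-client objective F_i(w): mean logistic loss over the client's samples."""
    p = 1.0 / (1.0 + np.exp(-(c["X"] @ w)))
    eps = 1e-12
    return -np.mean(c["y"] * np.log(p + eps) + (1 - c["y"]) * np.log(1 - p + eps))

def global_loss(w, clusters):
    """Global objective F(w) = sum_i (|D_i| / |D|) * F_i(w)."""
    clients = [c for cl in clusters for c in cl]
    n_total = sum(c["n"] for c in clients)
    return sum(c["n"] / n_total * local_loss(w, c) for c in clients)

print(global_loss(np.zeros(dim), clusters))   # loss of the zero model, ~log(2)
```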
2. Algorithmic Workflow and Model Update Rules
HierFAVG proceeds in rounds, alternating between local SGD updates at clients, periodic client–edge aggregations, and less frequent global (edge–cloud) aggregations. Core parameters include:
- $\eta$: learning rate
- $\kappa_1$: number of local SGD steps between edge aggregations
- $\kappa_2$: number of edge aggregations between cloud synchronizations
Let $\mathbf{w}_t^{i}$ denote client $i$'s model at step $t$, $\mathbf{w}_t^{\ell}$ the model held by edge $\ell$, and $\mathbf{w}_t$ the cloud/global model. The update rules are:
- Local SGD at Clients: $\mathbf{w}_t^{i} = \mathbf{w}_{t-1}^{i} - \eta \, \nabla F_i\!\left(\mathbf{w}_{t-1}^{i}\right)$
- Edge Aggregation (every $\kappa_1$ steps): $\mathbf{w}_t^{\ell} = \dfrac{\sum_{i \in \mathcal{C}^{\ell}} |\mathcal{D}_i| \, \mathbf{w}_t^{i}}{|\mathcal{D}^{\ell}|}$
- Cloud Aggregation (every $\kappa_1 \kappa_2$ steps): $\mathbf{w}_t = \dfrac{\sum_{\ell} |\mathcal{D}^{\ell}| \, \mathbf{w}_t^{\ell}}{|\mathcal{D}|}$
After each aggregation, the updated model is broadcast back down: from edges to their clients, and from the cloud to all edges and clients.
The algorithm cycles through these steps for $T$ total local SGD updates, with suitable initialization and synchronization at each level (Liu et al., 2019, Yang et al., 2022).
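The loop below is a self-contained Python sketch of this workflow (synthetic least-squares data, full-batch gradients in place of mini-batch SGD, full participation, and illustrative values for $\eta$, $\kappa_1$, $\kappa_2$, and $T$; it is a minimal illustration, not the reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-IID setup (illustrative): each edge cluster's clients draw
# linear-regression data around a cluster-specific true weight vector.
dim, eta, kappa1, kappa2, T = 5, 0.05, 4, 3, 120   # T = total local SGD steps
clusters = []
for ell in range(2):                                # 2 edge servers
    w_true = rng.normal(size=dim) + ell            # shift induces edge-level skew
    clients = []
    for _ in range(3):                              # 3 clients per edge
        X = rng.normal(size=(50, dim))
        y = X @ w_true + 0.1 * rng.normal(size=50)
        clients.append({"X": X, "y": y, "n": 50})
    clusters.append(clients)

def grad(w, c):
    """Gradient of the client's mean-squared-error loss at w."""
    return 2.0 * c["X"].T @ (c["X"] @ w - c["y"]) / c["n"]

# One model copy per client; edges and cloud only form weighted averages.
models = [[np.zeros(dim) for _ in cl] for cl in clusters]

for t in range(1, T + 1):
    # Local SGD step on every client (full-batch gradient for simplicity).
    for ell, cl in enumerate(clusters):
        for i, c in enumerate(cl):
            models[ell][i] -= eta * grad(models[ell][i], c)

    if t % kappa1 == 0:
        # Edge aggregation: weighted average within each cluster, broadcast back.
        for ell, cl in enumerate(clusters):
            n_edge = sum(c["n"] for c in cl)
            w_edge = sum(c["n"] * m for c, m in zip(cl, models[ell])) / n_edge
            models[ell] = [w_edge.copy() for _ in cl]

    if t % (kappa1 * kappa2) == 0:
        # Cloud aggregation: weighted average across edges, broadcast to everyone.
        n_all = sum(c["n"] for cl in clusters for c in cl)
        w_cloud = sum(c["n"] * m for cl, ms in zip(clusters, models)
                      for c, m in zip(cl, ms)) / n_all
        models = [[w_cloud.copy() for _ in cl] for cl in clusters]

print("final global model:", models[0][0])
```

Because every edge aggregation overwrites its clients' models with the cluster average, the cloud step reduces to the data-size-weighted average over edges, matching the update rules above.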
3. Mathematical Formulation and Model Drift
Between successive cloud aggregations, the local client update is simply SGD. Edge and cloud aggregation events replace all participating models with the respective weighted averages. This structure induces hierarchical model drift owing to updates on increasingly stale local parameters.
The full update at a cloud synchronization, i.e., whenever $t \equiv 0 \pmod{\kappa_1 \kappa_2}$, can be expressed as
$$\mathbf{w}_t = \sum_{\ell} \frac{|\mathcal{D}^{\ell}|}{|\mathcal{D}|} \sum_{i \in \mathcal{C}^{\ell}} \frac{|\mathcal{D}_i|}{|\mathcal{D}^{\ell}|} \, \mathbf{w}_t^{i},$$
a nested weighted average of all client models accumulated since the previous cloud round.
The hierarchical approach introduces two main sources of divergence:
- Client–edge divergence $\delta$: measures statistical heterogeneity between a client and its edge-level aggregate.
- Edge–cloud divergence $\Delta$: measures statistical heterogeneity between an edge and the global average.
These divergences, along with $\kappa_1$ and $\kappa_2$, determine how far models can drift from global optima during intermediate local and edge phases (Liu et al., 2019, Yang et al., 2022).
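As a hedged illustration, one can probe these two divergences empirically by comparing gradients at a common model $\mathbf{w}$; the worst-case-gap estimator and the synthetic least-squares data below are assumptions for this sketch, not the papers' formal definitions:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 5

def make_cluster(shift, n_clients=3, n=50):
    """Illustrative edge cluster: clients share a cluster-level weight shift."""
    w_true = rng.normal(size=dim) + shift
    clients = []
    for _ in range(n_clients):
        X = rng.normal(size=(n, dim))
        clients.append({"X": X, "y": X @ w_true + 0.1 * rng.normal(size=n), "n": n})
    return clients

clusters = [make_cluster(shift=ell) for ell in range(2)]

def grad(w, c):
    """Gradient of the client's mean-squared-error loss at w."""
    return 2.0 * c["X"].T @ (c["X"] @ w - c["y"]) / c["n"]

def weighted_grad(w, clients):
    """Data-size-weighted average gradient over a group of clients."""
    n = sum(c["n"] for c in clients)
    return sum(c["n"] * grad(w, c) for c in clients) / n

w = np.zeros(dim)
all_clients = [c for cl in clusters for c in cl]
g_cloud = weighted_grad(w, all_clients)

# Client–edge divergence: worst-case gap between a client gradient and its edge gradient.
delta = max(np.linalg.norm(grad(w, c) - weighted_grad(w, cl))
            for cl in clusters for c in cl)
# Edge–cloud divergence: worst-case gap between an edge gradient and the global gradient.
Delta = max(np.linalg.norm(weighted_grad(w, cl) - g_cloud) for cl in clusters)
print(f"delta = {delta:.3f}, Delta = {Delta:.3f}")
```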
4. Convergence Results and Theoretical Analysis
HierFAVG provides rigorous convergence guarantees under standard smoothness and bounded-divergence assumptions. For convex objectives, the deviation from the optimum after $T$ local iterations is bounded by (Liu et al., 2019):
$$F(\mathbf{w}^{T}) - F(\mathbf{w}^{*}) \;\le\; O\!\left(\frac{1}{T}\right) + G(\kappa_1, \kappa_2),$$
where $G(\kappa_1, \kappa_2)$ quantifies hierarchical drift: it grows with the aggregation periods $\kappa_1, \kappa_2$ and the divergences $\delta, \Delta$, with $G(1, 1) = 0$.
In the non-convex setting, HierFAVG ensures that the time-averaged squared gradient norm converges to a neighborhood of zero, with the radius determined by cumulative drift.
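Schematically, and with constants and problem-dependent factors suppressed (this compact reading is an interpretation of the cited result, not its exact statement), the non-convex guarantee has the form:

```latex
% Schematic non-convex guarantee: vanishing optimization error plus a
% drift-determined radius (constants suppressed; assumes smooth objectives
% and bounded divergences \delta, \Delta).
\[
  \frac{1}{T} \sum_{t=1}^{T} \bigl\lVert \nabla F(\mathbf{w}_t) \bigr\rVert^{2}
  \;\lesssim\;
  \underbrace{\frac{F(\mathbf{w}_0) - F^{*}}{\eta\, T}}_{\text{vanishes as } T \to \infty}
  \;+\;
  \underbrace{G(\kappa_1, \kappa_2)}_{\text{radius set by cumulative drift}} .
\]
```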
A closely related independent analysis confirms a sublinear rate in the total number of iterations $T$, plus a heterogeneity penalty that increases with the length of the local and edge aggregation intervals (Yang et al., 2022).
5. Communication–Computation Trade-Offs and Parameter Tuning
HierFAVG’s hierarchical structure allows explicit tuning of $\kappa_1$ and $\kappa_2$ to balance localized computation against the overhead of communication at each tier:
- Smaller $\kappa_1$ (frequent edge aggregation): Reduces model drift and accelerates convergence but increases communication between clients and edges.
- Larger $\kappa_2$ (infrequent cloud aggregation): Reduces global communication but can increase overall drift, especially if edge datasets are non-IID.
When edge-level data are IID ($\Delta = 0$), increasing $\kappa_2$ does not degrade convergence, enabling significant communication savings.
Design guidelines recommend a small $\kappa_1$ where client–edge communication is cheap, a large $\kappa_2$ when edge–cloud communication is expensive and edges see homogeneous data, and adaptive tuning of both in heterogeneous and resource-constrained environments. Diminishing step sizes are advised for asymptotic optimality in convex problems (Liu et al., 2019, Yang et al., 2022).
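A small accounting sketch in Python (assuming full participation, synchronous rounds, and one model exchange per aggregation event; purely illustrative) makes the per-tier communication counts behind these guidelines explicit:

```python
def comm_rounds(T, kappa1, kappa2):
    """Per-tier exchange counts for T total local SGD steps under HierFAVG.

    Each client uploads to / downloads from its edge every kappa1 steps;
    each edge exchanges with the cloud every kappa1 * kappa2 steps.
    """
    edge_rounds = T // kappa1                # client <-> edge exchanges per client
    cloud_rounds = T // (kappa1 * kappa2)    # edge <-> cloud exchanges per edge
    return edge_rounds, cloud_rounds

# Larger kappa1 cuts edge traffic; larger kappa2 cuts (expensive) cloud traffic.
for k1, k2 in [(2, 2), (6, 2), (6, 10)]:
    edge, cloud = comm_rounds(T=120, kappa1=k1, kappa2=k2)
    print(f"kappa1={k1:>2}, kappa2={k2:>2}: {edge:3d} edge rounds, {cloud:2d} cloud rounds")
```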
6. Empirical Performance and Limitations
Empirical experiments using CNNs on MNIST and CIFAR-10, with various non-IID data configurations, confirm the theoretical findings:
- Training speed: Wall-clock time to a fixed target accuracy is reduced by 3–4× compared to cloud-only FedAvg for both MNIST and CIFAR-10.
- Device energy: End-device energy consumption is cut by up to 30% for MNIST due to reduced uplink bandwidth and more efficient computation schedules.
- Parameter sensitivity: Reducing $\kappa_1$ accelerates convergence; increasing $\kappa_2$ is safe only when edge-level data are homogeneous.
- Heterogeneity sensitivity: Model drift grows and convergence slows dramatically if $\kappa_1$ or $\kappa_2$ is too large in highly non-IID cases.
HierFAVG’s main limitations are heightened sensitivity to large aggregation periods in the presence of data heterogeneity and delayed global alignment due to infrequent cloud aggregations. These limitations motivate newer variants, such as HierMo, which layer momentum on top of the HierFAVG baseline for provably tighter convergence bounds (Yang et al., 2022).
7. Extensions and Comparative Remarks
HierFAVG represents the archetype of multi-tier model-averaging in federated learning. Its simplicity enables easy analysis and practical deployment in heterogeneous networks, but leaves open the challenge of mitigating model drift in highly non-IID settings or under infrequent synchronization.
Recent work demonstrates that injecting momentum at either or both the worker and edge tiers (as in HierMo) yields strictly superior convergence rates, especially for deep or nonconvex models, by reducing oscillation and steady-state error due to drift. Optimization of aggregation periods, as in HierOPT, further refines the computation–communication trade-off.
A plausible implication is that the design and tuning of multi-tier FL algorithms in realistic networks should increasingly incorporate drift-mitigation techniques (such as momentum, adaptive periods, or personalized models) as network and data heterogeneity intensifies (Yang et al., 2022, Liu et al., 2019).