Power-Controlled DP Decentralized Learning
- The paper introduces a joint optimization framework that balances transmit power allocation and privacy noise injection to meet differential privacy guarantees while enhancing model accuracy.
- It leverages over-the-air multicast aggregation and auxiliary compensation to effectively manage heterogeneous channel gains and unbalanced network topologies.
- Theoretical analysis demonstrates an $O(\log T)$ regret bound, with numerical experiments validating faster convergence and improved performance under varying privacy constraints.
A power-controlled differentially private decentralized learning algorithm is a collaborative machine learning protocol that simultaneously manages the allocation of transmit power and the calibration of privacy-inducing noise in a network of distributed clients communicating over heterogeneous wireless channels. Such algorithms are designed to maintain strong differential privacy guarantees for each client's local data, while adapting energy usage and maximizing overall model accuracy, especially in wireless environments with varying channel gains and multicast communication.
1. Integrated Protocol Design
The setting consists of $n$ clients, each with a local dataset, connected via wireless multicast channels characterized by heterogeneous and possibly time-varying channel gains $h_{ij}^{(t)}$. Each client $i$ maintains a model parameter vector $x_i^{(t)}$ at iteration $t$ and participates in decentralized collaborative training. The protocol proceeds through the following components in every epoch:
- Local Update and Noise Injection: Each node computes a local stochastic gradient $g_i^{(t)}$ on its individual objective and performs a gradient descent step. Simultaneously, the node generates a Gaussian noise vector $n_i^{(t)} \sim \mathcal{N}(0, (\sigma_i^{(t)})^2 I)$, with appropriately chosen variance, to be injected into its transmitted message for privacy protection.
- Power Splitting and Transmission Signal Construction: Each client is assigned a fixed total transmit power $P_i$, which is split at each epoch into two fractions via allocation factors $\alpha_i^{(t)}$ (signal) and $\beta_i^{(t)}$ (noise). The transmitted multicast signal takes the form
$$s_i^{(t)} = \sqrt{\alpha_i^{(t)} P_i}\, x_i^{(t)} + \sqrt{\beta_i^{(t)} P_i}\, n_i^{(t)}.$$
This split controls the relative power of the true model signal and the injected noise on the wireless channel.
- Wireless Multicast Aggregation: Due to the superposition property of the wireless medium ("over-the-air" computation), each node receives a signal formed as a weighted sum of its neighbors' transmitted signals, with weights determined by the instantaneous channel gains and power allocations. The weights form the entries of a row-stochastic, channel-dependent adjacency matrix $A^{(t)} = [a_{ij}^{(t)}]$.
- Decentralized Update with Network Compensation: The update at node $i$ aggregates the received (possibly noisy) messages from all neighbors,
$$x_i^{(t+1)} = \Pi_{\mathcal{X}}\Big(\sum_{j} a_{ij}^{(t)}\, \tilde{x}_j^{(t)} - \eta_t\, \frac{g_i^{(t)}}{y_i^{(t)}}\Big),$$
where $\Pi_{\mathcal{X}}$ is the projection onto the feasible domain $\mathcal{X}$, $\eta_t$ is a decaying learning rate (e.g., $\eta_t \propto 1/t$), $g_i^{(t)}$ is the local (possibly clipped) stochastic gradient, $\tilde{x}_j^{(t)}$ is the noisy model received from neighbor $j$, and $y_i^{(t)}$ is an auxiliary variable compensating for the imbalance introduced by the row-stochastic aggregation weights by tracking the corresponding entry of the left Perron eigenvector of $A^{(t)}$. One epoch of this pipeline is sketched below.
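The per-epoch pipeline can be illustrated with a minimal Python/NumPy sketch, assuming ideal channel knowledge, self-loop gains on the diagonal of the gain matrix, and illustrative helper names (`epoch`, `project`) that are not from the paper:

```python
import numpy as np

def epoch(x, Y, grads, H, P, alpha, beta, sigma, eta, project):
    """One epoch of power-controlled DP decentralized learning (sketch).

    x: (n, d) local models; Y: (n, n) auxiliary matrix (identity at t = 0)
    whose diagonal tracks the left Perron eigenvector of the mixing matrix;
    grads: (n, d) clipped local gradients; H: (n, n) channel gains, with
    H[j, i] the gain from transmitter j to receiver i (self-loops included);
    P, alpha, beta, sigma: (n,) powers, split factors, and DP noise scales.
    """
    n, d = x.shape
    # Transmitted multicast signal: power-split model part plus DP noise,
    # s_i = sqrt(alpha_i P_i) x_i + sqrt(beta_i P_i) n_i.
    noise = sigma[:, None] * np.random.randn(n, d)
    s = np.sqrt(alpha * P)[:, None] * x + np.sqrt(beta * P)[:, None] * noise

    # Over-the-air superposition: receiver i hears sum_j H[j, i] * s_j.
    # Normalizing by the total model-signal amplitude makes the effective
    # mixing weights row-stochastic.
    amp = H.T * np.sqrt(alpha * P)[None, :]   # model amplitude at i from j
    A = amp / amp.sum(axis=1, keepdims=True)  # row-stochastic weight matrix
    mixed = (H.T @ s) / amp.sum(axis=1, keepdims=True)

    # Compensated, projected update: divide the gradient by the running
    # Perron-eigenvector estimate to undo the row-stochastic imbalance.
    Y = A @ Y
    pi_hat = np.maximum(np.diag(Y), 1e-12)
    x_new = project(mixed - eta * grads / pi_hat[:, None])
    return x_new, Y
```

Dividing the superimposed reception by the aggregate model amplitude turns the signal part into a convex combination of neighbor models, while the injected DP noise enters additively with power set by the $\beta$ split.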
2. Joint Power Allocation and Noise Control
The central innovation is the joint optimization of the transmit power allocation ($\alpha_i^{(t)}$, $\beta_i^{(t)}$) and the privacy noise variance $(\sigma_i^{(t)})^2$ for each client and iteration. The design seeks to:
- Maximize utility: Allocate as much power as possible to the true model signal ($\alpha_i^{(t)}$ large), subject to the fixed total power $P_i$.
- Guarantee privacy: Ensure that the injected noise (the $\beta_i^{(t)}$ portion and its variance $(\sigma_i^{(t)})^2$) is sufficient to achieve a prescribed $(\epsilon, \delta)$-differential privacy requirement for each client, given the instantaneous channel conditions.
The joint optimization is formalized as a program of the form
$$\max_{\alpha_i^{(t)},\, \beta_i^{(t)},\, \sigma_i^{(t)}} \alpha_i^{(t)} \quad \text{s.t.} \quad \epsilon_{ij}^{(t)} \le \bar{\epsilon} \ \text{ for all links } (i, j),$$
where $\epsilon_{ij}^{(t)}$ captures the privacy leakage on the link from $i$ to $j$, explicitly depending on the actual power split factors, channel gains, learning rate, and noise variance.
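Under the assumption, consistent with the formulation above, that the per-link leakage grows monotonically as more power is routed to the model signal, the largest admissible split can be found by a one-dimensional bisection; the function names below are illustrative, not the paper's:

```python
def max_signal_fraction(leakage, eps_budget, lo=0.0, hi=1.0, iters=50):
    """Largest alpha in [lo, hi] whose worst-case per-link leakage stays
    within the privacy budget; `leakage(alpha)` is assumed increasing."""
    if leakage(hi) <= eps_budget:
        return hi                 # full signal power is already private enough
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if leakage(mid) <= eps_budget:
            lo = mid              # feasible: try routing more power to signal
        else:
            hi = mid              # infeasible: back off toward more noise
    return lo

# Purely hypothetical leakage model, increasing in the signal/noise power
# ratio alpha / (1 - alpha); the paper's Theorem 1 supplies the real one.
alpha_star = max_signal_fraction(
    lambda a: 0.4 * (a / (1.0 - a + 1e-12)) ** 0.5, eps_budget=1.0)
```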
3. Differential Privacy Mechanism in Wireless Multicast
The protocol uses the Gaussian mechanism for differential privacy, calibrated to account for the network and wireless channel:
- Sensitivity Evaluation: The privacy sensitivity on link $(i, j)$ is proportional to the product of the learning rate, the (possibly bounded) gradient, and the scaled channel gain:
$$\Delta_{ij}^{(t)} \;\propto\; c\, \eta_t\, G\, h_{ij}^{(t)} \sqrt{\alpha_i^{(t)} P_i},$$
where $G$ bounds the stochastic gradient norm and $c$ is an auxiliary constant.
- Noise Calibration: The variance $(\sigma_i^{(t)})^2$ of the additive Gaussian noise is chosen so that the perturbed transmitted output on each link meets $(\epsilon, \delta)$-differential privacy for the specified privacy parameters; a standard calibration is sketched after this list.
- Multicast and Aggregation: The superposition and simultaneous transmission enable efficient communication, with all neighbors receiving the same noisy signal.
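A minimal sketch of this calibration, using the classical Gaussian-mechanism rule $\sigma \ge \Delta \sqrt{2\ln(1.25/\delta)}/\epsilon$ (valid for $\epsilon \le 1$); the constant `c` and the numeric values are placeholders rather than the paper's:

```python
import math

def gaussian_sigma(sensitivity, eps, delta):
    """Classical Gaussian-mechanism noise scale for (eps, delta)-DP."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def link_sensitivity(eta, G, h_ij, alpha, P, c=1.0):
    """Per-link sensitivity: proportional to the learning rate, the gradient
    bound, and the channel-scaled model-signal amplitude (c auxiliary)."""
    return c * eta * G * abs(h_ij) * math.sqrt(alpha * P)

# Example: noise scale for one link under placeholder parameters.
sigma = gaussian_sigma(
    link_sensitivity(eta=0.05, G=1.0, h_ij=0.8, alpha=0.7, P=10.0),
    eps=0.5, delta=1e-5)
```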
4. Multicast Network Model and Compensation for Unbalanced Topology
The network is represented by a directed graph with clients as nodes and edges determined by multicast reachability. The weight matrix $A^{(t)}$ is row-stochastic but generally not doubly stochastic (due to the heterogeneity in $h_{ij}^{(t)}$ and per-epoch power splits), which results in possible imbalance. This is compensated by:
- Using an auxiliary matrix of variables to estimate the left Perron eigenvector $\pi^{(t)}$ of $A^{(t)}$.
- Scaling the gradient step direction by the inverse of $y_i^{(t)}$, node $i$'s running estimate of $\pi_i$, in the update rule, ensuring unbiased aggregation over the network dynamics (verified numerically below).
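The tracking scheme behind this compensation can be checked in a few lines: iterating the auxiliary matrix through the row-stochastic weights, starting from the identity, recovers the left Perron eigenvector (a self-contained numerical check, with a random positive matrix standing in for $A^{(t)}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n)) + 0.1           # positive entries: strongly connected
A /= A.sum(axis=1, keepdims=True)      # row-stochastic mixing matrix

# Node i holds row Y[i]; iterating Y <- A @ Y from the identity drives A^t
# toward the rank-one limit 1 pi^T, so the diagonal converges to pi.
Y = np.eye(n)
for _ in range(200):
    Y = A @ Y
pi_hat = np.diag(Y)

# Compare with the true left Perron eigenvector (pi^T A = pi^T, sum = 1).
w, V = np.linalg.eig(A.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(np.allclose(pi_hat, pi, atol=1e-8))   # True
```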
5. Theoretical Convergence Properties
The algorithm achieves an $O(\log T)$ regret bound, with $T$ the number of epochs, even in the presence of adaptive power/noise allocation, multicast communication, and unbalanced topology: the regret satisfies a bound of the form
$$\mathrm{Reg}(T) \le C_1 \log T + C_2,$$
where $C_1$ and $C_2$ depend on network size, model dimension, per-epoch gradient bounds, and privacy/noise parameters. The analysis leverages the properties of the decentralized stochastic update, the projection step, and the compensation for row-stochastic communication. The rate compares favorably to standard benchmarks and demonstrates resilience to the inclusion of privacy noise and practical over-the-air transmission constraints.
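The logarithmic scaling is consistent with the decaying step size: with $\eta_t \propto 1/t$, the per-epoch error contributions accumulate as a harmonic sum, which gives a heuristic reading of the rate (not the full proof):
$$\sum_{t=1}^{T} \eta_t \;=\; \eta_0 \sum_{t=1}^{T} \frac{1}{t} \;\le\; \eta_0\,(1 + \ln T) \;=\; O(\log T).$$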
6. Numerical Results and Communication Efficiency
Experiments demonstrate that the proposed algorithm yields higher accuracy than prior schemes such as mPEDFL under equivalent privacy constraints, particularly due to:
- The use of a single multicast message per epoch (with a unified power split for all neighbors) rather than separate unicast transmissions, reducing the required channel uses from one per neighbor to one per node per epoch.
- Faster convergence in terms of communication epochs and channel uses when measured at fixed privacy budgets $(\epsilon, \delta)$.
- Consistent test performance superiority under both moderate and strong privacy regimes (e.g., on MNIST).
7. Mathematical Summary
Key equations and constraints include:
| Expression | Role | Notes |
|---|---|---|
| $s_i^{(t)} = \sqrt{\alpha_i^{(t)} P_i}\, x_i^{(t)} + \sqrt{\beta_i^{(t)} P_i}\, n_i^{(t)}$ | Transmitted multicast signal | Power split for model and noise |
| $A^{(t)} = [a_{ij}^{(t)}]$, row-stochastic | Adjacency matrix | Encodes channel gain- and power-adaptive weights |
| $\Delta_{ij}^{(t)} \propto c\, \eta_t\, G\, h_{ij}^{(t)} \sqrt{\alpha_i^{(t)} P_i}$ | Sensitivity | For DP calibration |
| $\epsilon_{ij}^{(t)}$ | Per-link privacy leakage | From Theorem 1 |
| $\max \alpha_i^{(t)}$ s.t. $\epsilon_{ij}^{(t)} \le \bar{\epsilon}$ | Power allocation optimization | For all links $(i, j)$ |
8. Significance and Application Domains
This class of power-controlled privacy-preserving decentralized algorithms is relevant wherever distributed wireless clients seek to jointly train shared models subject to both energy and privacy constraints. Typical domains include federated learning over wireless edge networks, IoT deployments, collaborative mobile ML, and sensor fusion in smart infrastructures. By tuning transmit power allocation and privacy noise injection in concert and exploiting over-the-air aggregation, these methods enable scalable, accurate, and privacy-compliant learning under real-world wireless network constraints and channel heterogeneity.
This approach is distinguishable from prior art by its explicit joint control of "model power" versus "noise power," analytic compensation for network imbalance, and communication-efficient multicast protocol rooted in wireless channel properties (Ziaeddini et al., 25 Sep 2025).