Power-Controlled DP Decentralized Learning

Updated 30 September 2025
  • The paper introduces a joint optimization framework that balances transmit power allocation and privacy noise injection to meet differential privacy guarantees while enhancing model accuracy.
  • It leverages over-the-air multicast aggregation and auxiliary compensation to effectively manage heterogeneous channel gains and unbalanced network topologies.
  • Theoretical analysis demonstrates an O(log T) regret bound, with numerical experiments validating faster convergence and improved performance under varying privacy constraints.

A power-controlled differentially private decentralized learning algorithm is a collaborative machine learning protocol that simultaneously manages the allocation of transmit power and the calibration of privacy-inducing noise in a network of distributed clients communicating over heterogeneous wireless channels. Such algorithms are designed to maintain strong differential privacy guarantees for each client's local data, while adapting energy usage and maximizing overall model accuracy, especially in wireless environments with varying channel gains and multicast communication.

1. Integrated Protocol Design

The setting consists of $K$ clients, each with a local dataset, connected via wireless multicast channels characterized by heterogeneous and possibly time-varying channel gains $|h_{ji}|$. Each client $i$ maintains a model parameter vector $x_{i,t}$ at iteration $t$ and participates in decentralized collaborative training. The protocol proceeds through the following components in every epoch (a minimal code sketch follows the list):

  • Local Update and Noise Injection: Each node $i$ computes a local stochastic gradient on its individual objective $f_i(x; D_i)$ and performs a gradient descent step. Simultaneously, the node generates a Gaussian noise vector $\eta_{i,t}$, with appropriately chosen variance, to be injected into its transmitted message for privacy protection.
  • Power Splitting and Transmission Signal Construction: Each client is assigned a fixed total transmit power $p_i$, which is split at each epoch into two fractions via allocation factors $\alpha_{i,t}$ (signal) and $\beta_{i,t} = 1 - \alpha_{i,t}$ (noise). The transmitted multicast signal is:

$$\tilde{x}_{i,t} = \sqrt{\alpha_{i,t}\, p_i}\, x_{i,t} + \sqrt{\beta_{i,t}\, p_i}\, \eta_{i,t}$$

This split controls the relative "power" of the true model signal and the injected noise in the wireless channel.

  • Wireless Multicast Aggregation: Due to the superposition property of the wireless medium ("over-the-air" computation), each node $i$ receives a signal formed as a weighted sum of its neighbors' transmitted signals, with weights determined by the instantaneous channel gains and power allocations. The weights form the entries of a row-stochastic, channel-dependent adjacency matrix $A$.
  • Decentralized Update with Network Compensation: The update at node $i$ aggregates the received (possibly noisy) messages from all neighbors,

$$x_{i,t+1} = \Pi_\Omega \left\{ \sum_{j} a_{ij} \left[ x_{j,t} + \text{noise} \right] - \frac{\gamma_t}{z_{ii,t}}\, g_{i,t} \right\}$$

where $\Pi_\Omega$ is the projection onto the feasible domain, $\gamma_t$ is a decaying learning rate (e.g., $1/\sqrt{t}$), $g_{i,t}$ is the local (possibly clipped) stochastic gradient, and $z_{ii,t}$ is an auxiliary variable that compensates for the imbalance introduced by the row-stochastic aggregation weights by tracking the left Perron eigenvector of $A$.
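
The sketch below walks through one epoch of this protocol in Python. It is a minimal illustration under simplifying assumptions (synthetic channel gains, a placeholder gradient, a uniform compensation variable, and a row-normalization of the over-the-air weights that may differ from the paper's exact $c_{i,t}$/$R$ normalization), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 4, 10                          # clients, model dimension
x = rng.normal(size=(K, d))           # local models x_{i,t}
p = np.ones(K)                        # total transmit power p_i per client
alpha = np.full(K, 0.8)               # power fraction for the model signal
beta = 1.0 - alpha                    # power fraction for the privacy noise
sigma = np.ones(K)                    # DP noise standard deviations
h = np.abs(rng.normal(size=(K, K)))   # h[j, i] plays the role of |h_{ji}|
gamma_t = 0.1                         # decaying step size, e.g. ~ 1/sqrt(t)

def clipped_gradient(xi):
    """Placeholder for the clipped stochastic gradient g_{i,t} of f_i."""
    return 0.1 * xi

# 1) Local noise generation and transmitted multicast signal construction:
#    x_tilde_i = sqrt(alpha_i p_i) x_i + sqrt(beta_i p_i) eta_i.
eta = sigma[:, None] * rng.normal(size=(K, d))
x_tilde = np.sqrt(alpha * p)[:, None] * x + np.sqrt(beta * p)[:, None] * eta

# 2) Over-the-air aggregation: receiver i observes the superposition
#    y_i = sum_j |h_{ji}| x_tilde_j and rescales it so the effective
#    weights a_{ij} on the model signals are row-stochastic.
y = h.T @ x_tilde                     # y[i] = sum_j h[j, i] x_tilde[j]
row_norm = (h.T * np.sqrt(alpha * p)[None, :]).sum(axis=1)
mixed = y / row_norm[:, None]         # ~ sum_j a_{ij} (x_{j,t} + scaled noise)

# 3) Compensated decentralized update; z_ii should track the left Perron
#    eigenvector of A (uniform placeholder here).
z_ii = np.full(K, 1.0 / K)
g = np.stack([clipped_gradient(x[i]) for i in range(K)])
x_next = mixed - (gamma_t / z_ii)[:, None] * g
# The projection Pi_Omega onto the feasible set would be applied here.
```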

2. Joint Power Allocation and Noise Control

The central innovation is the joint optimization of the transmit power allocation ($\alpha_{i,t}$, $\beta_{i,t}$) and the privacy noise variance ($\sigma_{i,t}^2$) for each client and iteration. The design seeks to:

  • Maximize utility: Allocate as much power as possible to the true model signal (large $\alpha_{i,t}$), subject to the fixed total power $p_i$.
  • Guarantee privacy: Ensure that the injected noise (the $\beta_{i,t}$ portion, with variance $\sigma_{i,t}^2$) is sufficient to achieve a prescribed $(\epsilon, \delta)$-differential privacy requirement for each client, given the instantaneous channel conditions.

The joint optimization is formalized as:

$$\begin{aligned} \max_{\{\alpha_j\}} \quad & \sum_{j=1}^K \alpha_j \\ \text{s.t.} \quad & \epsilon_{ij}(\{\alpha_j\}, \{\sigma_{k,t}\}, \ldots) \leq \epsilon_{\max}, \quad \forall\, i, j \\ & 0 < \alpha_j < 1 \end{aligned}$$

where $\epsilon_{ij}(\cdot)$ captures the privacy leakage on the link from $j$ to $i$, explicitly depending on the actual power split factors, channel gains, learning rate, and noise variance.
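
As an illustration, the sketch below solves this power-split problem numerically for a small synthetic network, plugging in the per-link leakage expression tabulated in Section 7. All parameter values (gains, bounds, privacy budget) are hypothetical, and a general-purpose SLSQP solver stands in for whatever method the paper actually uses.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical problem instance.
K = 4
rng = np.random.default_rng(0)
h = rng.rayleigh(1.0, size=(K, K))    # h[j, i] plays the role of |h_{ji}|
p = np.ones(K)                        # per-client total transmit power p_j
sigma2 = np.ones(K)                   # privacy noise variances sigma_{k,t}^2
G, gamma_t, theta = 1.0, 0.1, 1.0     # gradient bound, step size, aux constant
eps_max, delta = 2.0, 1e-5

c_delta = np.sqrt(2.0 * np.log(1.25 / delta))

def leakage(alpha):
    """Per-link leakage eps_{ij} for power splits alpha (beta = 1 - alpha)."""
    beta = 1.0 - alpha
    # Aggregate noise power at receiver i: sum_k |h_{ki}|^2 beta_k p_k sigma_k^2.
    noise = (h**2 * (beta * p * sigma2)[:, None]).sum(axis=0)          # shape (K,)
    # Sensitivity-side numerator for sender j at receiver i.
    num = 2.0 * G * gamma_t * theta * h * np.sqrt(alpha * p)[:, None]  # (j, i)
    return c_delta * num / np.sqrt(noise)[None, :]

# Maximize sum_j alpha_j subject to eps_{ij} <= eps_max and 0 < alpha_j < 1.
res = minimize(
    lambda a: -a.sum(),
    x0=np.full(K, 0.5),
    bounds=[(1e-6, 1.0 - 1e-6)] * K,
    constraints=[{"type": "ineq",
                  "fun": lambda a: eps_max - leakage(a).ravel()}],
    method="SLSQP",
)
print("power splits:", np.round(res.x, 3))
print("worst-case leakage:", leakage(res.x).max())
```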

3. Differential Privacy Mechanism in Wireless Multicast

The protocol uses the Gaussian mechanism for differential privacy, calibrated to account for the network and wireless channel:

  • Sensitivity Evaluation: The privacy sensitivity is proportional to the product of the learning rate, the (bounded) gradient norm, and the scaled channel gain:

$$\Delta_{ij}^{(t)} \leq 2 G \gamma_t \theta\, |h_{ji}| \sqrt{\alpha_{j,t}\, p_j}$$

where $G$ bounds the stochastic gradient norm and $\theta$ is an auxiliary constant.

  • Noise Calibration: The variance for the additive Gaussian noise is chosen so that the perturbed transmitted output on each link meets:

$$\sigma = \frac{\Delta}{\epsilon} \sqrt{2 \ln(1.25/\delta)}$$

for the specified privacy parameters $(\epsilon, \delta)$ (a helper implementing this calibration appears after this list).

  • Multicast and Aggregation: The superposition and simultaneous transmission enable efficient communication, with all neighbors receiving the same noisy signal.
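
A minimal helper combining the sensitivity bound and the Gaussian-mechanism rule above might look as follows; the argument values in the example are hypothetical.

```python
import numpy as np

def dp_noise_std(G, gamma_t, theta, h_ji, alpha_j, p_j, eps, delta):
    """Per-link Gaussian noise calibration: sigma = (Delta / eps) * sqrt(2 ln(1.25/delta)),
    with sensitivity Delta <= 2 G gamma_t theta |h_ji| sqrt(alpha_j p_j)."""
    Delta = 2.0 * G * gamma_t * theta * abs(h_ji) * np.sqrt(alpha_j * p_j)
    return (Delta / eps) * np.sqrt(2.0 * np.log(1.25 / delta))

# Example: noise scale for one link under a (1.0, 1e-5) privacy budget.
print(dp_noise_std(G=1.0, gamma_t=0.1, theta=1.0, h_ji=0.8,
                   alpha_j=0.7, p_j=1.0, eps=1.0, delta=1e-5))
```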

4. Multicast Network Model and Compensation for Unbalanced Topology

The network is represented by a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with clients as nodes and edges determined by multicast reachability. Because $A$ is row-stochastic but generally not doubly stochastic (owing to the heterogeneity in $|h_{ji}|$ and the per-epoch power splits), the aggregation can be unbalanced. This is compensated by:

  • Using auxiliary variables $z_{i,t}$ at each node to estimate the left Perron eigenvector of $A$.
  • Scaling the gradient step at node $i$ by the inverse of $z_{ii,t}$ in the update rule, ensuring unbiased aggregation over the network dynamics (a sketch of this construction follows).
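
The sketch below shows the standard construction for estimating the left Perron eigenvector of a row-stochastic matrix through per-node mixing; the paper's exact recursion for $z_{i,t}$ may differ in detail.

```python
import numpy as np

def estimate_perron_diagonal(A, num_iters=200):
    """Estimate pi_i = [left Perron eigenvector of A]_i at each node i.

    Node i keeps z_{i,t} in R^K, initialized to its indicator e_i, and mixes
    with its neighbors: z_{i,t+1} = sum_j a_{ij} z_{j,t}. Stacking rows gives
    Z_{t+1} = A Z_t with Z_0 = I, so Z_t = A^t -> 1 pi^T, and the diagonal
    entry z_{ii,t} -> pi_i, the compensation factor in the update rule.
    """
    K = A.shape[0]
    Z = np.eye(K)
    for _ in range(num_iters):
        Z = A @ Z
    return np.diag(Z)

# Quick check on a random row-stochastic matrix.
rng = np.random.default_rng(0)
A = rng.random((5, 5))
A /= A.sum(axis=1, keepdims=True)
pi_hat = estimate_perron_diagonal(A)
print(pi_hat, pi_hat.sum())   # entries of pi; the sum is ~1
```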

5. Theoretical Convergence Properties

The algorithm achieves an $O(\log T)$ regret bound, with $T$ the number of epochs, even in the presence of adaptive power/noise allocation, multicast communication, and an unbalanced topology:

$$\mathbb{E}[\mathcal{R}_i(T)] \leq U_1 + U_2 (1 + \log T)$$

where $U_1$ and $U_2$ depend on the network size, model dimension, per-epoch gradient bounds, and privacy/noise parameters. The analysis leverages the properties of the decentralized stochastic update, the projection step, and the compensation for row-stochastic communication. The rate compares favorably to standard benchmarks and demonstrates resilience to the inclusion of privacy noise and practical over-the-air transmission constraints.
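
For orientation, $\mathcal{R}_i(T)$ is a node-wise regret; a common definition in decentralized online optimization, stated here as an assumption since the paper's exact definition is not reproduced above, is

$$\mathcal{R}_i(T) = \sum_{t=1}^{T} \sum_{j=1}^{K} f_j(x_{i,t}) - \min_{x \in \Omega} \sum_{t=1}^{T} \sum_{j=1}^{K} f_j(x),$$

i.e., the cumulative global loss of node $i$'s iterates against the best fixed model in hindsight.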

6. Numerical Results and Communication Efficiency

Experiments demonstrate that the proposed algorithm yields higher accuracy than prior work (such as mPED$^2$FL) under equivalent privacy constraints, particularly due to:

  • The use of a single multicast message per epoch (with a unified power split for all neighbors) rather than separate unicast transmissions, greatly reducing the number of required channel uses.
  • Faster convergence in terms of communication epochs and channel uses when measured at fixed privacy budgets ($\epsilon_{\max}$).
  • Consistently superior test performance under both moderate and strong privacy regimes (e.g., $\epsilon_{\max} = 1, 2$ on MNIST).

7. Mathematical Summary

Key equations and constraints include:

| Expression | Role | Notes |
|---|---|---|
| $\tilde{x}_{i,t} = \sqrt{\alpha_{i,t} p_i}\, x_{i,t} + \sqrt{\beta_{i,t} p_i}\, \eta_{i,t}$ | Transmitted multicast signal | Power split between model and noise |
| $a_{ij} = \dfrac{\lvert h_{ji} \rvert \sqrt{\alpha_{j,t} p_j}}{c_{i,t} R}$, $a_{ii} = 1 - \dfrac{d_i}{R}$ | Adjacency matrix | Channel-gain- and power-adaptive weights |
| $\Delta_{ij}^{(t)} \leq 2G \gamma_t \theta \lvert h_{ji} \rvert \sqrt{\alpha_{j,t} p_j}$ | Sensitivity | For DP calibration |
| $\epsilon_{ij} = \dfrac{2G \gamma_t \theta \lvert h_{ji} \rvert \sqrt{\alpha_j p_j}}{\left[\sum_{k} \lvert h_{ki} \rvert^2 \beta_k p_k \sigma_{k,t}^2 \right]^{1/2}} \sqrt{2\ln(1.25/\delta)}$ | Per-link privacy leakage | From Theorem 1 |
| $\max_{\{\alpha_j\}} \sum_j \alpha_j$ s.t. $\epsilon_{ij} \leq \epsilon_{\max}$ | Power allocation optimization | For all $i, j$ |
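
For concreteness, the following sketch assembles the adjacency matrix of the second row. The normalizer $c_{i,t}$ is chosen here, as an assumption, so that each row sums to one (the row-stochasticity the protocol relies on); $R$ and $d_i$ are treated as given constants, and the paper's exact choices may differ.

```python
import numpy as np

def build_adjacency(h, alpha, p, R, d):
    """Channel- and power-dependent weight matrix A (see table above).

    h[j, i] plays the role of |h_{ji}|; d[i] is the quantity in
    a_ii = 1 - d_i / R. c_{i,t} is set (assumption) so rows sum to one.
    """
    K = h.shape[0]
    S = h.T * np.sqrt(alpha * p)[None, :]   # S[i, j] = |h_{ji}| sqrt(alpha_j p_j)
    A = np.zeros((K, K))
    for i in range(K):
        off = [j for j in range(K) if j != i]
        c_i = S[i, off].sum() / d[i]        # gives the off-diagonal row mass d_i / R
        A[i, off] = S[i, off] / (c_i * R)
        A[i, i] = 1.0 - d[i] / R            # requires d_i < R for a positive diagonal
    return A

# Sanity check: every row sums to one.
rng = np.random.default_rng(2)
K = 4
A = build_adjacency(h=np.abs(rng.normal(size=(K, K))),
                    alpha=np.full(K, 0.8), p=np.ones(K),
                    R=10.0, d=np.full(K, 3.0))
print(A.sum(axis=1))   # ~ [1. 1. 1. 1.]
```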

8. Significance and Application Domains

This class of power-controlled privacy-preserving decentralized algorithms is relevant wherever distributed wireless clients seek to jointly train shared models subject to both energy and privacy constraints. Typical domains include federated learning over wireless edge networks, IoT deployments, collaborative mobile ML, and sensor fusion in smart infrastructures. By tuning transmit power allocation and privacy noise injection in concert and exploiting over-the-air aggregation, these methods enable scalable, accurate, and privacy-compliant learning under real-world wireless network constraints and channel heterogeneity.

This approach is distinguished from prior work by its explicit joint control of "model power" versus "noise power," its analytic compensation for network imbalance, and a communication-efficient multicast protocol rooted in wireless channel properties (Ziaeddini et al., 25 Sep 2025).

References

  • Ziaeddini et al., 25 Sep 2025.