Clarify the Utility–Cost Tradeoff of Privacy-Preserving Techniques in Heterogeneous Federated Learning

Determine the real utility–cost tradeoff of privacy-preserving techniques for federated learning, including differential privacy, homomorphic encryption, secure aggregation, multi-party computation, and trusted execution environments, when they are deployed under heterogeneous conditions that reflect practical statistical, device, and communication variability. The goal is to assess the practical viability of these mechanisms and their impact on model utility and system overhead.
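To make the utility side of this tradeoff concrete, the sketch below (illustrative only, not from the paper; the function names dp_aggregate and clip_update and the parameters clip_norm and noise_multiplier are assumptions) applies client-level differential privacy to federated averaging: each client update is clipped and Gaussian noise calibrated to the clip norm is added to the average, so a larger noise multiplier strengthens the privacy guarantee but degrades the aggregated model.

# Minimal sketch (assumed names/parameters): client-level DP for FedAvg.
import numpy as np

def clip_update(update, clip_norm):
    # Scale the client update so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Average clipped updates, then add Gaussian noise whose scale is set by
    # the clip norm (the sensitivity of the mean is clip_norm / n_clients).
    # Raising noise_multiplier buys privacy at the cost of utility -- the
    # tradeoff the proposed evaluation would quantify.
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Toy usage: heterogeneous clients produce updates of very different magnitude,
# so clipping hits them unevenly -- one way statistical heterogeneity interacts
# with the privacy mechanism.
updates = [np.random.default_rng(i).normal(0, 0.1 * (i + 1), size=100)
           for i in range(10)]
noisy_mean = dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0)
print("aggregate norm:", np.linalg.norm(noisy_mean))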

Background

The paper focuses on measuring poisoning-based security risks in federated learning and introduces TFLlib to evaluate attacks under realistic deployment constraints. Privacy-specific risks and defenses are deliberately deferred; the authors note that production FL stacks rarely integrate many of the privacy-preserving techniques proposed in the literature.

The authors highlight that, under real-world heterogeneity (statistical, device, and communication), the practical costs and utility impacts of these privacy mechanisms are not well understood, motivating a dedicated evaluation to characterize their tradeoffs before adoption in deployed systems.
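On the cost side, the following sketch of pairwise additive masking, in the spirit of secure aggregation, illustrates where system overhead enters (again an assumption-laden toy: the helpers pairwise_mask and masked_update are hypothetical, and a deterministic seed stands in for the key exchange a real protocol would use). Each pair of clients derives a shared mask that one adds and the other subtracts, so the masks cancel in the server's sum while hiding individual updates; each client must compute one mask per peer, so the cost grows with cohort size and becomes fragile under the device and communication heterogeneity described above.

# Minimal sketch (not the paper's protocol): pairwise additive masking.
import numpy as np

def pairwise_mask(i, j, dim):
    # Mask derived from a seed both clients could agree on (via a
    # Diffie-Hellman key exchange in a real protocol; a toy seed here).
    seed = hash((min(i, j), max(i, j))) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)

def masked_update(i, update, n_clients):
    # Client i adds the mask for each pair (i, j) with i < j and subtracts it
    # otherwise, so every mask appears once with each sign across the cohort.
    masked = update.copy()
    for j in range(n_clients):
        if j == i:
            continue
        sign = 1.0 if i < j else -1.0
        masked += sign * pairwise_mask(i, j, update.shape[0])
    return masked

n, dim = 5, 8
updates = [np.random.default_rng(i).normal(size=dim) for i in range(n)]
server_sum = sum(masked_update(i, updates[i], n) for i in range(n))
assert np.allclose(server_sum, sum(updates), atol=1e-8)  # masks cancel
# Cost dimension: each client computes O(n) masks of size dim, and real
# protocols add dropout-recovery rounds -- exactly the kind of overhead a
# heterogeneity-aware evaluation would need to measure.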

References

"many privacy-preserving techniques proposed in the literature are still rarely integrated into production FL stacks, and their real utility-cost tradeoff under heterogeneous deployments remains unclear."

Unveiling the Security Risks of Federated Learning in the Wild: From Research to Practice (2603.20615 - Chen et al., 21 Mar 2026), Section 7, Future Work