Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks (2403.10005v1)

Published 15 Mar 2024 in cs.CR

Abstract: The advent of Federated Learning (FL) as a distributed machine learning paradigm has introduced new cybersecurity challenges, notably adversarial attacks that threaten model integrity and participant privacy. This study proposes an innovative security framework inspired by Control-Flow Attestation (CFA) mechanisms, traditionally used in cybersecurity, to ensure software execution integrity. By integrating digital signatures and cryptographic hashing within the FL framework, we authenticate and verify the integrity of model updates across the network, effectively mitigating risks associated with model poisoning and adversarial interference. Our approach, novel in its application of CFA principles to FL, ensures contributions from participating nodes are authentic and untampered, thereby enhancing system resilience without compromising computational efficiency or model performance. Empirical evaluations on benchmark datasets, MNIST and CIFAR-10, demonstrate our framework's effectiveness, achieving a 100% success rate in integrity verification and authentication, as well as notable resilience against adversarial attacks. These results validate the proposed security enhancements and open avenues for more secure, reliable, and privacy-conscious distributed machine learning solutions. Our work bridges a critical gap between cybersecurity and distributed machine learning, offering a foundation for future advancements in secure FL.
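
The abstract's central mechanism is hash-then-sign verification of model updates before aggregation. The following is a minimal sketch of that idea, not the paper's implementation: the Ed25519 signature scheme, the `cryptography` package, and the pickle-serialized toy update are illustrative assumptions, since the paper does not prescribe specific primitives here.

```python
import hashlib
import pickle

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Client side: each participant holds a registered signing key pair ---
client_key = Ed25519PrivateKey.generate()
client_pub = client_key.public_key()        # shared with the aggregator at enrollment

model_update = {"layer1.weight": [0.01, -0.02], "layer1.bias": [0.1]}  # toy update
payload = pickle.dumps(model_update)        # serialization is illustrative only

digest = hashlib.sha256(payload).digest()   # cryptographic hash of the update
signature = client_key.sign(digest)         # digital signature over the hash

# --- Aggregator side: authenticate the sender and verify update integrity ---
received_digest = hashlib.sha256(payload).digest()
try:
    client_pub.verify(signature, received_digest)   # raises if forged or tampered
    accepted = True
except InvalidSignature:
    accepted = False

print("update accepted:", accepted)   # True only for an authentic, untampered update
```

In this sketch, a tampered payload changes the recomputed hash and the signature check fails, so the aggregator can discard the contribution before it influences the global model.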
