FederatedTrust: A Solution for Trustworthy Federated Learning (2302.09844v2)

Published 20 Feb 2023 in cs.CR and cs.AI

Abstract: The rapid expansion of the Internet of Things (IoT) and Edge Computing has presented challenges for centralized Machine and Deep Learning (ML/DL) methods due to the presence of distributed data silos that hold sensitive information. To address concerns regarding data privacy, collaborative and privacy-preserving ML/DL techniques like Federated Learning (FL) have emerged. However, ensuring data privacy and performance alone is insufficient, since there is a growing need to establish trust in model predictions. Existing literature has proposed various approaches to trustworthy ML/DL (excluding data privacy), identifying robustness, fairness, explainability, and accountability as important pillars. Nevertheless, further research is required to identify trustworthiness pillars and evaluation metrics specifically relevant to FL models, as well as to develop solutions that can compute the trustworthiness level of FL models. This work examines the existing requirements for evaluating trustworthiness in FL and introduces a comprehensive taxonomy consisting of six pillars (privacy, robustness, fairness, explainability, accountability, and federation), along with over 30 metrics for computing the trustworthiness of FL models. Subsequently, an algorithm named FederatedTrust is designed based on the pillars and metrics identified in the taxonomy to compute the trustworthiness score of FL models. A prototype of FederatedTrust is implemented and integrated into the learning process of FederatedScope, a well-established FL framework. Finally, five experiments are conducted using different configurations of FederatedScope to demonstrate the utility of FederatedTrust in computing the trustworthiness of FL models. Three experiments employ the FEMNIST dataset, and two utilize the N-BaIoT dataset, reflecting a real-world IoT security use case.
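
The abstract does not state how the 30+ metrics are combined into a single score, so the snippet below is only a minimal sketch of one plausible aggregation: normalize each metric to [0, 1], average the metrics within each of the six pillars, and take a weighted average across pillars. The pillar names follow the paper's taxonomy; the metric names, values, weights, and the equal-weighting scheme are assumptions for illustration, not FederatedTrust's actual algorithm.

```python
# Illustrative sketch (not the authors' implementation) of aggregating
# per-pillar metric scores into one trustworthiness score. Assumes every
# metric is already normalized to [0, 1], metrics within a pillar are
# averaged, and pillars are combined by a weighted average. Metric names,
# values, and the equal weights are hypothetical placeholders.

PILLAR_WEIGHTS = {
    "privacy": 1 / 6,
    "robustness": 1 / 6,
    "fairness": 1 / 6,
    "explainability": 1 / 6,
    "accountability": 1 / 6,
    "federation": 1 / 6,
}


def pillar_score(metrics: dict[str, float]) -> float:
    """Average the normalized metric values belonging to one pillar."""
    if not metrics:
        return 0.0
    return sum(metrics.values()) / len(metrics)


def trustworthiness_score(pillars: dict[str, dict[str, float]]) -> float:
    """Weighted average of pillar scores, yielding a value in [0, 1]."""
    return sum(
        PILLAR_WEIGHTS[name] * pillar_score(metrics)
        for name, metrics in pillars.items()
    )


# Example with made-up values for two pillars; the four omitted pillars
# contribute nothing here, which drags the overall score down.
example = {
    "privacy": {"differential_privacy": 0.8, "entropy": 0.6},
    "robustness": {"certified_robustness": 0.5},
}
print(f"Trustworthiness: {trustworthiness_score(example):.2f}")
```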

Authors (6)
  1. Pedro Miguel Sánchez Sánchez (27 papers)
  2. Ning Xie (57 papers)
  3. Gérôme Bovet (56 papers)
  4. Gregorio Martínez Pérez (35 papers)
  5. Burkhard Stiller (39 papers)
  6. Alberto Huertas Celdrán (43 papers)
Citations (14)