Generalizing Differentially Private Decentralized Deep Learning with Multi-Agent Consensus (2306.13892v2)
Abstract: Cooperative decentralized learning relies on direct information exchange between communicating agents, each with access to locally available datasets. The goal is to agree on model parameters that are optimal over all data. However, sharing parameters with untrustworthy neighbors can incur privacy risks by leaking exploitable information. To enable trustworthy cooperative learning, we propose a framework that embeds differential privacy into decentralized deep learning and secures each agent's local dataset during and after cooperative training. We prove convergence guarantees for algorithms derived from this framework and demonstrate its practical utility when applied to subgradient and ADMM decentralized approaches, achieving accuracies that approach the centralized baseline while keeping individual data samples resilient to inference attacks. Furthermore, we study the relationships between accuracy, privacy budget, and the communication graph's properties on collaborative classification tasks, discovering a useful invariance to the communication graph structure beyond a threshold.
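
As a rough illustration of the kind of privatized consensus the abstract describes, the sketch below combines a decentralized subgradient update with Gaussian noise added to each agent's parameters before they are shared with neighbors. This is a minimal sketch, not the paper's algorithm: the ring topology, the linear-regression objective, the clipping rule, and all hyperparameters (`step`, `clip`, `sigma`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each agent holds a private linear-regression dataset.
n_agents, dim = 4, 5
X = [rng.normal(size=(20, dim)) for _ in range(n_agents)]
w_true = rng.normal(size=dim)
y = [Xi @ w_true + 0.1 * rng.normal(size=20) for Xi in X]

# Assumed ring communication graph: each agent talks to its two neighbors.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

w = [np.zeros(dim) for _ in range(n_agents)]
step, clip, sigma = 0.01, 1.0, 0.1  # illustrative hyperparameters, not from the paper

for t in range(200):
    # Each agent privatizes its model with Gaussian noise before sharing it,
    # so neighbors never see the exact locally trained parameters.
    shared = [wi + rng.normal(scale=sigma * clip, size=dim) for wi in w]
    new_w = []
    for i in range(n_agents):
        # Consensus step: average the agent's own noisy model with its neighbors'.
        avg = np.mean([shared[j] for j in neighbors[i]] + [shared[i]], axis=0)
        # Local subgradient step on private data, with norm clipping to bound
        # the sensitivity that the Gaussian noise is calibrated against.
        grad = X[i].T @ (X[i] @ avg - y[i]) / len(y[i])
        grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
        new_w.append(avg - step * grad)
    w = new_w

print("mean distance to w_true:",
      np.mean([np.linalg.norm(wi - w_true) for wi in w]))
```

The noise scale `sigma * clip` stands in for a Gaussian-mechanism calibration to a privacy budget; in a real differentially private scheme the scale would be derived from the target (epsilon, delta) and the clipped sensitivity, and the privacy loss would be accounted across all rounds.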