Exploring the Impact of Disrupted Peer-to-Peer Communications on Fully Decentralized Learning in Disaster Scenarios (2310.02986v1)
Abstract: Fully decentralized learning distributes learning resources and decision-making capabilities across multiple user devices or nodes, and is rapidly gaining popularity due to its privacy-preserving and decentralized nature. Importantly, this crowdsourcing of the learning process allows the system to continue functioning even if some nodes are affected or disconnected. In a disaster scenario, communication infrastructure and centralized systems may be disrupted or completely unavailable, making standard centralized learning infeasible; fully decentralized learning can therefore help in such settings. However, transitioning from centralized to peer-to-peer communications introduces a dependency between the learning process and the topology of the communication graph among nodes. In a disaster scenario, even peer-to-peer communications are susceptible to abrupt changes, such as devices running out of battery or becoming disconnected from others because of their position. In this study, we investigate the effects of various disruptions to peer-to-peer communications on decentralized learning in a disaster setting. We examine the resilience of a decentralized learning process when a subset of devices drops out of the process abruptly. To this end, we analyze the difference between losing devices that hold data, i.e., potential knowledge, and devices that contribute only to graph connectivity, i.e., hold no data. Our findings on a Barabási-Albert graph topology, where training data is distributed across nodes in an IID fashion, indicate that the accuracy of the learning process is affected more by a loss of connectivity than by a loss of data. Nevertheless, the network remains relatively robust, and the learning process can still achieve a good level of accuracy.
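To make the experimental contrast concrete, the sketch below simulates the two dropout scenarios described in the abstract on a Barabási-Albert topology: removing devices that hold data versus removing devices that only provide connectivity, then measuring how much of the graph and how many reachable data holders survive. This is a minimal illustration, not the authors' pipeline; the node counts, the data/no-data split, the dropout fraction, and the use of networkx are all assumptions made for the example.

```python
# Illustrative sketch (assumed parameters, not the paper's code): compare dropping
# data-holding nodes vs. connectivity-only nodes on a Barabasi-Albert graph.
import random
import networkx as nx

random.seed(0)

N_NODES = 100          # total devices in the communication graph (assumed)
BA_ATTACH = 2          # edges added per new node in the BA model (assumed)
DATA_FRACTION = 0.5    # fraction of nodes that hold training data (assumed)
DROP_FRACTION = 0.2    # fraction of devices that abruptly drop out (assumed)

G = nx.barabasi_albert_graph(N_NODES, BA_ATTACH, seed=0)

# Mark a random half of the nodes as data holders; the rest only contribute connectivity.
data_nodes = set(random.sample(list(G.nodes), int(DATA_FRACTION * N_NODES)))

def drop_and_measure(graph, candidates, k):
    """Remove k nodes drawn from `candidates`; report the surviving structure."""
    dropped = set(random.sample(list(candidates), k))
    H = graph.copy()
    H.remove_nodes_from(dropped)
    largest_cc = max(nx.connected_components(H), key=len) if H.number_of_nodes() else set()
    reachable_data = len((data_nodes - dropped) & largest_cc)
    return len(largest_cc), reachable_data

k = int(DROP_FRACTION * N_NODES)

# Scenario 1: lose devices that hold data (potential knowledge).
cc_size, data_left = drop_and_measure(G, data_nodes, k)
print(f"drop data holders:      largest component={cc_size}, reachable data nodes={data_left}")

# Scenario 2: lose devices that only provide graph connectivity.
cc_size, data_left = drop_and_measure(G, set(G.nodes) - data_nodes, k)
print(f"drop connectivity-only: largest component={cc_size}, reachable data nodes={data_left}")
```

Comparing the size of the largest connected component and the number of data holders still reachable within it mirrors the paper's distinction between losing knowledge and losing connectivity; in a full experiment, a decentralized training run would then proceed over the surviving component.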