Multi-Agent Optimization and Learning: A Non-Expansive Operators Perspective (2405.11999v1)

Published 20 May 2024 in math.OC, cs.SY, and eess.SY

Abstract: Multi-agent systems are increasingly widespread in a range of application domains, with optimization and learning underpinning many of the tasks that arise in this context. Different approaches have been proposed to enable the cooperative solution of these optimization and learning problems, including first- and second-order methods, and dual (or Lagrangian) methods, all of which rely on consensus and message-passing. In this article we discuss these algorithms through the lens of non-expansive operator theory, providing a unifying perspective. We highlight the insights that this viewpoint delivers, and discuss how it can spark future original research.
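The unifying viewpoint the abstract describes treats consensus-based algorithms as fixed-point iterations of non-expansive operators. As a minimal sketch (not taken from the paper), the snippet below illustrates this on the simplest case: consensus averaging, where the operator T(x) = Wx with a doubly stochastic mixing matrix W is non-expansive, and the Krasnosel'skii–Mann iteration x ← (1−α)x + αT(x) drives the agents' values to a fixed point, i.e. a consensus state. The 4-agent ring network and its weights are hypothetical.

```python
# Hypothetical 4-agent ring network: W is doubly stochastic (rows and
# columns sum to 1), so T(x) = W x is non-expansive and its fixed points
# are exactly the consensus states x1 = x2 = x3 = x4.
W = [[0.50, 0.25, 0.00, 0.25],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.25, 0.00, 0.25, 0.50]]

def apply_T(x):
    """Consensus operator T(x) = W x (one round of neighbor averaging)."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def km_step(x, alpha=0.5):
    """One Krasnosel'skii-Mann step: x <- (1 - alpha) x + alpha T(x)."""
    Tx = apply_T(x)
    return [(1 - alpha) * xi + alpha * ti for xi, ti in zip(x, Tx)]

x = [4.0, 0.0, 2.0, 6.0]  # each agent starts with a private local value
for _ in range(200):
    x = km_step(x)
# Since W is doubly stochastic the average is preserved, so the iterates
# converge to the consensus state [3.0, 3.0, 3.0, 3.0].
print(x)
```

The same template covers the methods surveyed in the paper: distributed (sub)gradient schemes and dual/ADMM-type methods differ only in which non-expansive (or averaged) operator T is iterated.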

IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. 
(2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. 
Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 
10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. 
Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. 
(2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. 
Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). 
Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). 
Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 
10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. 
IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. 
(2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). 
Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. 
(2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. 
(2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. 
(2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). 
Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. 
(2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. 
(2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. 
(2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  2. Bauschke HH and Combettes PL (2017). Convex analysis and monotone operator theory in Hilbert spaces, second ed., CMS books in mathematics, Springer, Cham. Bof et al. (2019) Bof N, Carli R, Notarstefano G, Schenato L and Varagnolo D (2019), Jul. Multiagent Newton–Raphson Optimization Over Lossy Networks. IEEE Transactions on Automatic Control 64 (7): 2983–2990. 10.1109/TAC.2018.2874748. Boyd and Vandenberghe (2004) Boyd S and Vandenberghe L (2004). Convex optimization, Cambridge university press. Chen and Sayed (2013) Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. 
Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. 
IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Bof N, Carli R, Notarstefano G, Schenato L and Varagnolo D (2019), Jul. Multiagent Newton–Raphson Optimization Over Lossy Networks. IEEE Transactions on Automatic Control 64 (7): 2983–2990. 10.1109/TAC.2018.2874748. Boyd and Vandenberghe (2004) Boyd S and Vandenberghe L (2004). Convex optimization, Cambridge university press. Chen and Sayed (2013) Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. 
Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. 
(2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Boyd S and Vandenberghe L (2004). Convex optimization, Cambridge university press. Chen and Sayed (2013) Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. 
Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. 
(2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. 
Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. 
(2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. 
Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 
10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. 
(2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. 
(2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. 
(2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  3. Bof N, Carli R, Notarstefano G, Schenato L and Varagnolo D (2019), Jul. Multiagent Newton–Raphson Optimization Over Lossy Networks. IEEE Transactions on Automatic Control 64 (7): 2983–2990. 10.1109/TAC.2018.2874748. Boyd and Vandenberghe (2004) Boyd S and Vandenberghe L (2004). Convex optimization, Cambridge university press. Chen and Sayed (2013) Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. 
Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 
10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Boyd S and Vandenberghe L (2004). Convex optimization, Cambridge university press. Chen and Sayed (2013) Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 
10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). 
Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. 
(2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. 
Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. 
Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. 
Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. 
(2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. 
(2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. 
A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405.
Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183.
Kia et al. (2019) Kia SS, Van Scoy B, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783.
Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749.
Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131.
Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020.
Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293.
Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2.
Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43.
Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051.
Sundararajan et al. (2019) Sundararajan A, Van Scoy B and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080.
Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894.
Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940.
Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338.
Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. 10.1109/JPROC.2020.3024266.
Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579.
Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  4. Boyd S and Vandenberghe L (2004). Convex Optimization. Cambridge University Press.
  5. Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763.
  6. Davis D and Yin W (2016). Convergence Rate Analysis of Several Splitting Schemes. In Glowinski R, Osher SJ and Yin W (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163.
  7. Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160.
  8. Grudzień M, Malinovsky G and Richtarik P (2023), Apr. Can 5th Generation Local Training Methods Support Client Sampling? Yes! In Ruiz F, Dy J and van de Meent JW (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research 206, PMLR, 1055–1092.
  9. Iutzeler F and Hendrickx JM (2019). A Generic Online Acceleration Scheme for Optimization Algorithms via Relaxation and Inertia. Optimization Methods and Software 34 (2): 383–405.
  10. Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183.
  11. Kia SS, Van Scoy B, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783.
  12. Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749.
  13. Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131.
  14. Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020.
  15. Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293.
  16. Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2.
  17. Ryu EK and Boyd S (2016). A Primer on Monotone Operator Methods. Applied and Computational Mathematics 15 (1): 3–43.
  18. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051.
  19. Sundararajan A, Van Scoy B and Lessard L (2019), Jul. A Canonical Form for First-Order Distributed Optimization Algorithms. 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080.
  20. Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894.
  21. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940.
  22. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021). FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization. In Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW (Eds.), Advances in Neural Information Processing Systems 34, Curran Associates, Inc., 30326–30338.
  23. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. 10.1109/JPROC.2020.3024266.
  24. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579.
  25. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. 
Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). 
Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. 
Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. 
A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. 
Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. 
(2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. 
(2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. 
IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. 
(2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 
10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  5. Chen J and Sayed AH (2013), Apr. Distributed Pareto Optimization via Diffusion Strategies. IEEE Journal of Selected Topics in Signal Processing 7 (2): 205–220. 10.1109/JSTSP.2013.2246763. Davis and Yin (2016) Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. 
Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. 
(2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. 
Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 
10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Takezawa Y, Niwa K and Yamada M (2023). 
Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. 
IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  6. Davis D and Yin W (2016), Convergence Rate Analysis of Several Splitting Schemes, Glowinski R, Osher SJ and Yin W, (Eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, Cham, 115–163. Giselsson and Boyd (2017) Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. 
(2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Giselsson P and Boyd S (2017), Feb. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM. IEEE Transactions on Automatic Control 62 (2): 532–544. 10.1109/TAC.2016.2564160. Grudzień et al. (2023) Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. 
Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. 
Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092. Iutzeler and Hendrickx (2019) Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 
10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. 
Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 
10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. 
(2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338.
Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. 10.1109/JPROC.2020.3024266.
Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579.
Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
(2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 
10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. 
Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 
10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. 
Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. 
(2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 
10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. 
IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. 
Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. 
Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. 
On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). 
Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). 
Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 
  8. Grudzień M, Malinovsky G and Richtarik P (2023), Apr., Can 5th Generation Local Training Methods Support Client Sampling? Yes!, Ruiz F, Dy J and van de Meent JW, (Eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 206, PMLR, 1055–1092.
  9. Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405.
  10. Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183.
  11. Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783.
  12. Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749.
  13. Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131.
  14. Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020.
  15. Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293.
  16. Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2.
  17. Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43.
  18. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051.
  19. Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080.
  20. Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894.
  21. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940.
  22. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338.
  23. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. 10.1109/JPROC.2020.3024266.
  24. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579.
  25. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Takezawa Y, Niwa K and Yamada M (2023). 
Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. 
IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  9. Iutzeler F and Hendrickx JM (2019). A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software 34 (2): 383–405. Jakovetic (2019) Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. 
Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183. 
Kia et al. (2019) Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. 
IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783. Li et al. (2020) Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. 
(2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749. Nedić and Liu (2018) Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. 
(2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Notarstefano et al. (2019) Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. 
Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. 
A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020. Olfati-Saber et al. (2007) Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. 
(2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293. Peng et al. (2016) Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2. Ryu and Boyd (2016) Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43. Sayed (2014) Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. 
Jakovetic D (2019), Mar. A Unification and Generalization of Exact Distributed First-Order Methods. IEEE Transactions on Signal and Information Processing over Networks 5 (1): 31–46. 10.1109/TSIPN.2018.2846183.
Kia SS, Van Scoy B, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783.
Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749.
Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131.
Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020.
Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293.
Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2.
Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43.
Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051.
Sundararajan A, Van Scoy B and Lessard L (2019), Jul. A Canonical Form for First-Order Distributed Optimization Algorithms. 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080.
Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894.
Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940.
Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021). FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization. In: Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338.
Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266.
Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579.
Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. 
IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  11. Kia SS, Scoy BV, Cortes J, Freeman RA, Lynch KM and Martinez S (2019), Jun. Tutorial on Dynamic Average Consensus: The Problem, Its Applications, and the Algorithms. IEEE Control Systems Magazine 39 (3): 40–72. 10.1109/MCS.2019.2900783.
  12. Li T, Sahu AK, Talwalkar A and Smith V (2020), May. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 37 (3): 50–60. 10.1109/MSP.2020.2975749.
  13. Nedić A and Liu J (2018), May. Distributed Optimization for Control. Annual Review of Control, Robotics, and Autonomous Systems 1 (1): 77–103. 10.1146/annurev-control-060117-105131. Xu J, Tian Y, Sun Y and Scutari G (2021).
Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051. Sundararajan et al. (2019) Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 
10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Sundararajan A, Scoy BV and Lessard L (2019), Jul., A Canonical Form for First-Order Distributed Optimization Algorithms, 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080. Takezawa et al. (2023) Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Takezawa Y, Niwa K and Yamada M (2023). 
Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894. Tian et al. (2020) Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940. Tran Dinh et al. (2021) Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. 
(2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021), FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization, Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338. Xin et al. (2020) Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266. Xu et al. (2021) Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. 
IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579. Yuan et al. (2016) Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
  14. Notarstefano G, Notarnicola I and Camisa A (2019). Distributed Optimization for Smart Cyber-Physical Networks. Foundations and Trends® in Systems and Control 7 (3): 253–383. 10.1561/2600000020.
  15. Olfati-Saber R, Fax JA and Murray RM (2007), Jan. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 95 (1): 215–233. 10.1109/JPROC.2006.887293.
  16. Peng Z, Wu T, Xu Y, Yan M and Yin W (2016). Coordinate Friendly Structures, Algorithms and Applications. Annals of Mathematical Sciences and Applications 1 (1): 57–119. 10.4310/AMSA.2016.v1.n1.a2.
  17. Ryu EK and Boyd S (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15 (1): 3–43.
  18. Sayed A (2014). Adaptation, Learning, and Optimization over Networks. Foundations and Trends® in Machine Learning 7 (4-5): 311–801. 10.1561/2200000051.
  19. Sundararajan A, Scoy BV and Lessard L (2019), Jul. A Canonical Form for First-Order Distributed Optimization Algorithms. 2019 American Control Conference (ACC), IEEE, Philadelphia, PA, USA, 4075–4080.
  20. Takezawa Y, Niwa K and Yamada M (2023). Communication Compression for Decentralized Learning With Operator Splitting Methods. IEEE Transactions on Signal and Information Processing over Networks 9: 581–595. 10.1109/TSIPN.2023.3307894.
  21. Tian Y, Sun Y and Scutari G (2020), Dec. Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization. IEEE Transactions on Automatic Control 65 (12): 5264–5279. 10.1109/TAC.2020.2977940.
  22. Tran Dinh Q, Pham NH, Phan D and Nguyen L (2021). FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization. Ranzato M, Beygelzimer A, Dauphin Y, Liang PS and Vaughan JW, (Eds.), Advances in Neural Information Processing Systems, 34, Curran Associates, Inc., 30326–30338.
  23. Xin R, Pu S, Nedić A and Khan UA (2020), Nov. A General Framework for Decentralized Optimization With First-Order Methods. Proceedings of the IEEE 108 (11): 1869–1889. ISSN 1558-2256. 10.1109/JPROC.2020.3024266.
  24. Xu J, Tian Y, Sun Y and Scutari G (2021). Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis. IEEE Transactions on Signal Processing 69: 3555–3570. 10.1109/TSP.2021.3086579.
  25. Yuan K, Ling Q and Yin W (2016), Jan. On the Convergence of Decentralized Gradient Descent. SIAM Journal on Optimization 26 (3): 1835–1854. 10.1137/130943170.
