PARDON: Privacy-Aware and Robust Federated Domain Generalization

Published 30 Oct 2024 in cs.LG, cs.CV, and cs.DC | arXiv:2410.22622v2

Abstract: Federated Learning (FL) shows promise for preserving privacy while enabling collaborative learning. However, most current solutions focus on private data collected from a single domain. A significant challenge arises when client data come from diverse domains (i.e., domain shift), leading to poor performance on unseen domains. Existing Federated Domain Generalization (FedDG) approaches address this problem but assume each client holds data for an entire domain, limiting their practicality in real-world scenarios with domain-based heterogeneity and client sampling. In addition, some methods share information among clients, raising privacy concerns because this information could be used to reconstruct sensitive private data. To overcome these limitations, we introduce FISC, a novel FedDG paradigm designed to robustly handle more complex domain distributions across clients while preserving privacy. FISC enables learning across domains by extracting an interpolative style from local styles and employing contrastive learning, giving clients multi-domain representations and unbiased convergent targets. Empirical results on multiple datasets, including PACS, Office-Home, and IWildCam, show that FISC outperforms state-of-the-art (SOTA) methods, improving accuracy on unseen domains by 3.64% to 57.22%. Our code is available at https://github.com/judydnguyen/PARDON-FedDG.
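The abstract describes two ingredients: extracting an interpolative style from clients' local styles, and training with a contrastive objective. The sketch below is a rough, hypothetical illustration of those ideas, not the authors' implementation: it mixes AdaIN-style per-channel feature statistics from several assumed client "styles" into an interpolated style, and scores embeddings with a generic InfoNCE-style contrastive loss. All function names, shapes, and parameters here are assumptions for illustration only.

```python
import numpy as np

def channel_stats(feat):
    """Per-channel mean and std of a (C, H, W) feature map.

    In AdaIN-style methods these statistics act as the 'style' of the input.
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6  # avoid division by zero
    return mu, sigma

def interpolate_style(feat, styles, weights):
    """Re-normalize `feat` toward a convex combination of client styles.

    styles:  list of (mu, sigma) pairs, e.g. one per client/domain.
    weights: convex coefficients (non-negative, summing to 1).
    """
    mu_f, sigma_f = channel_stats(feat)
    # Interpolated ("intermediate") style statistics across domains.
    mu_mix = sum(w * mu for w, (mu, _) in zip(weights, styles))
    sigma_mix = sum(w * sigma for w, (_, sigma) in zip(weights, styles))
    # Whiten the content features, then re-stylize with the mixed statistics.
    return sigma_mix * (feat - mu_f) / sigma_f + mu_mix

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE contrastive loss on 1-D embeddings (cosine similarity)."""
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # pull anchor toward positive, away from negatives
```

With weights `[1.0]` and a single borrowed style, `interpolate_style` reduces to ordinary AdaIN-style transfer: the output's channel statistics match the borrowed style while the spatial content of the input is preserved.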
