
Harmonizing Generalization and Personalization in Federated Prompt Learning (2405.09771v2)

Published 16 May 2024 in cs.LG

Abstract: Federated Prompt Learning (FPL) incorporates large pre-trained Vision-Language Models (VLMs) into federated learning through prompt tuning. The transferable representations and remarkable generalization capacity of VLMs make them highly compatible with federated learning. Addressing data heterogeneity in federated learning requires personalization, but an excessive focus on personalization across clients can compromise the model's ability to generalize. To preserve the impressive generalization capability of VLMs, it is crucial to strike a balance between personalization and generalization in FPL. To tackle this challenge, we propose Federated Prompt Learning with CLIP Generalization and low-rank Personalization (FedPGP), which employs pre-trained CLIP to provide knowledge guidance on the global prompt for improved generalization and incorporates a low-rank adaptation term to personalize the global prompt. Further, FedPGP integrates a prompt-wise contrastive loss to achieve knowledge guidance and personalized adaptation simultaneously, enabling a harmonious balance between personalization and generalization in FPL. We conduct extensive experiments on various datasets, exploring base-to-novel generalization in both category-level and domain-level scenarios with heterogeneous data, and show the superiority of FedPGP in balancing generalization and personalization.
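The two mechanisms the abstract names, a low-rank adaptation term added to the shared global prompt and a prompt-wise contrastive loss, can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: it assumes a rank-1 adaptation, uses a generic InfoNCE-style loss with cosine similarity, and stands in a random tensor for the frozen-CLIP guidance signal; all names (`personalize_prompt`, `prompt_contrastive_loss`) are illustrative.

```python
import numpy as np

def personalize_prompt(global_prompt, U, V):
    """Client-side personalized prompt = shared global prompt + low-rank term U @ V."""
    return global_prompt + U @ V

def prompt_contrastive_loss(anchor, positive, negative, tau=0.5):
    """InfoNCE-style loss: pull the anchor prompt toward the positive
    representation and push it away from the negative one."""
    def cos(a, b):
        a, b = a.flatten(), b.flatten()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = np.exp(cos(anchor, negative) / tau)
    return -np.log(pos / (pos + neg))

# Toy shapes: a prompt of 4 tokens x 8 dims, rank-1 adaptation per client.
rng = np.random.default_rng(0)
g = rng.normal(size=(4, 8))            # global prompt (shared across clients)
U = rng.normal(size=(4, 1)) * 0.1      # low-rank factors, kept local
V = rng.normal(size=(1, 8)) * 0.1
p = personalize_prompt(g, U, V)        # personalized prompt for this client
clip_anchor = rng.normal(size=(4, 8))  # stand-in for frozen-CLIP knowledge guidance
loss = prompt_contrastive_loss(g, clip_anchor, p)
```

The low-rank factors keep the per-client parameter count small (4 + 8 values here instead of 32), which is the usual motivation for low-rank personalization in federated settings.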
