
ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning (2310.05143v1)

Published 8 Oct 2023 in cs.AI and cs.LG

Abstract: When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources. In addition to typical limitations such as data, computation, and communication costs, access to the models is also often limited. This paper endeavors to solve both the challenges of limited resources and personalization, i.e., distribution shifts between clients. To do so, we propose a method named ZOOPFL that uses Zeroth-Order Optimization for Personalized Federated Learning. ZOOPFL avoids direct interference with the foundation models and instead learns to adapt its inputs through zeroth-order optimization. In addition, we employ simple yet effective linear projections to remap its predictions for personalization. To reduce the computation costs and enhance personalization, we propose input surgery to incorporate an auto-encoder with low-dimensional and client-specific embeddings. We provide theoretical support for ZOOPFL to analyze its convergence. Extensive empirical experiments on computer vision and natural language processing tasks using popular foundation models demonstrate its effectiveness for FL on black-box foundation models.
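To make the core idea concrete, below is a minimal sketch of client-side adaptation around a black-box model, in the spirit the abstract describes: an input-side perturbation trained with a two-point zeroth-order gradient estimate (only forward queries to the frozen model), plus an output-side linear projection trained with an exact local gradient. Everything here is illustrative: `black_box_model` is a stand-in random network, the additive perturbation `delta` is a simplification of the paper's auto-encoder-based input surgery, and the toy data and hyperparameters are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen black-box foundation model: query-only access,
# no weights or gradients exposed (a fixed random network here).
W_frozen = rng.normal(size=(8, 8)) / np.sqrt(8)

def black_box_model(x):
    return np.tanh(W_frozen @ x)

def loss(pred, y):
    return 0.5 * np.sum((pred - y) ** 2)

def zo_gradient(delta, x, y, P, mu=1e-3, n_queries=20):
    """Two-point zeroth-order estimate of d(loss)/d(delta), where delta
    is a learned additive perturbation applied to the model's input."""
    g = np.zeros_like(delta)
    for _ in range(n_queries):
        u = rng.normal(size=delta.shape)  # random probe direction
        f_plus = loss(P @ black_box_model(x + delta + mu * u), y)
        f_minus = loss(P @ black_box_model(x + delta - mu * u), y)
        g += ((f_plus - f_minus) / (2.0 * mu)) * u
    return g / n_queries

# One client's toy data: adapt the frozen model to a local target.
x = rng.normal(size=8)
y = rng.normal(size=8)

delta = np.zeros(8)   # input-side adaptation, trained via ZOO
P = np.eye(8)         # output-side linear remap, trained with exact grads
lr_in, lr_out = 0.05, 0.05

for step in range(200):
    # Input adaptation: only forward queries to the black box are used.
    delta -= lr_in * zo_gradient(delta, x, y, P)
    # Output remap: the model's output h is observable, so the linear
    # projection can be updated analytically: d(loss)/dP = (P h - y) h^T.
    h = black_box_model(x + delta)
    P -= lr_out * np.outer(P @ h - y, h)

print("final loss:", loss(P @ black_box_model(x + delta), y))
```

In an FL setting, each client would hold its own `delta` (or, as in the paper, client-specific low-dimensional embeddings fed through an auto-encoder) and projection `P`, while the foundation model itself stays untouched and shared.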

Authors (9)
  1. Wang Lu (25 papers)
  2. Hao Yu (195 papers)
  3. Jindong Wang (150 papers)
  4. Damien Teney (43 papers)
  5. Haohan Wang (96 papers)
  6. Yiqiang Chen (44 papers)
  7. Qiang Yang (202 papers)
  8. Xing Xie (220 papers)
  9. Xiangyang Ji (157 papers)
Citations (6)