RobWE: Robust Watermark Embedding for Personalized Federated Learning Model Ownership Protection (2402.19054v1)
Abstract: Embedding watermarks into models is widely used to protect model ownership in federated learning (FL). However, existing methods are inadequate for protecting the ownership of the personalized models that clients obtain in personalized FL (PFL): aggregation of the global model in PFL creates conflicts among clients' private watermarks, and malicious clients may tamper with embedded watermarks to facilitate model leakage and evade accountability. This paper presents RobWE, a robust watermark embedding scheme that protects the ownership of personalized models in PFL. We first decouple the watermark embedding of a personalized model into two parts: head-layer embedding and representation-layer embedding. The head layer is a client's private part and does not participate in model aggregation, while the representation layer is the shared part that is aggregated. For representation-layer embedding, we employ a watermark slice embedding operation that avoids embedding conflicts. Furthermore, we design a malicious-watermark detection scheme that enables the server to verify the correctness of watermarks before aggregating local models. An extensive experimental evaluation demonstrates that RobWE significantly outperforms state-of-the-art FL watermark embedding schemes in terms of fidelity, reliability, and robustness.
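To make the embedding mechanism concrete, below is a minimal PyTorch sketch, not the paper's implementation: it assumes the common sign-loss (projection-matrix) regularizer from white-box model watermarking, a toy head/representation split, and hypothetical helper names (`flatten_params`, `embed_loss`, `slice_for_client`). It illustrates the two ideas the abstract describes: the private head carries a client's watermark without touching aggregation, and each client writes its representation-layer watermark into a disjoint parameter slice so that aggregated updates do not collide.

```python
# Hedged sketch of the two-part watermark embedding described in the
# abstract. Assumptions (not from the paper): a PyTorch model whose last
# linear layer is the private "head" and whose first layer stands in for
# the shared "representation"; watermarks are embedded with a hinge-style
# sign loss. All helper names below are illustrative.
import torch
import torch.nn as nn

def flatten_params(layers):
    """Concatenate the weights of the given layers into one vector."""
    return torch.cat([p.view(-1) for p in layers])

def embed_loss(weights: torch.Tensor, watermark: torch.Tensor,
               proj: torch.Tensor) -> torch.Tensor:
    """Hinge sign loss: push proj @ weights toward the {-1,+1}
    pattern encoded by the watermark bits."""
    target = watermark * 2.0 - 1.0           # bits {0,1} -> signs {-1,+1}
    return torch.relu(1.0 - target * (proj @ weights)).mean()

def slice_for_client(rep_weights: torch.Tensor, client_id: int,
                     num_clients: int) -> torch.Tensor:
    """Give each client a disjoint slice of the representation weights,
    so per-client watermarks do not conflict under aggregation."""
    n = rep_weights.numel() // num_clients
    return rep_weights[client_id * n:(client_id + 1) * n]

# --- toy usage --------------------------------------------------------
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
rep_layers = [model[0].weight]               # shared representation (toy)
head_layers = [model[2].weight]              # private head (toy)

client_id, num_clients, bits = 1, 4, 64
wm_head = torch.randint(0, 2, (bits,)).float()
wm_rep = torch.randint(0, 2, (bits,)).float()

head_vec = flatten_params(head_layers)
rep_vec = flatten_params(rep_layers)
my_slice = slice_for_client(rep_vec, client_id, num_clients)

proj_head = torch.randn(bits, head_vec.numel())  # client-secret matrices
proj_rep = torch.randn(bits, my_slice.numel())

# During local training, both terms would be added to the task loss.
loss_wm = embed_loss(head_vec, wm_head, proj_head) \
        + embed_loss(my_slice, wm_rep, proj_rep)
loss_wm.backward()                           # gradients reach the model
```

Verification under this scheme would check whether `torch.sign(proj @ weights)` reproduces the registered bit pattern; the server-side detection step described in the abstract would presumably run such a check on each client's shared slice before aggregation.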
Authors: Yang Xu, Yunlin Tan, Cheng Zhang, Kai Chi, Peng Sun, Wenyuan Yang, Ju Ren, Hongbo Jiang, Yaoxue Zhang