Extend privacy backdoors to parameter-efficient finetuning (LoRA)

Extend the privacy backdoor construction presented for full finetuning to parameter-efficient finetuning methods, specifically Low-Rank Adaptation (LoRA), by designing backdoors that remain effective when the victim updates only low-rank adapter weights during downstream training.

Background

The paper demonstrates privacy backdoors that trap and later reconstruct individual finetuning samples by tampering with pretrained model weights, assuming victims perform full finetuning (adding a linear classification head and updating all parameters). This assumption is central to their attack design and evaluation across MLPs and transformers (ViT and BERT).
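For concreteness, a minimal PyTorch sketch of the full-finetuning regime this threat model assumes; the class name FullFinetuneClassifier and the feature_dim / num_classes arguments are illustrative placeholders, not from the paper:

```python
import torch
import torch.nn as nn

class FullFinetuneClassifier(nn.Module):
    """Victim-side setup assumed by the threat model: a (possibly corrupted)
    pretrained backbone plus a freshly added linear classification head,
    with every parameter left trainable."""

    def __init__(self, backbone: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                          # downloaded pretrained weights
        self.head = nn.Linear(feature_dim, num_classes)   # new linear classification head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# All parameters (backbone and head) are handed to the optimizer, i.e. full finetuning:
# model = FullFinetuneClassifier(backbone, feature_dim=768, num_classes=10)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```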

Parameter-efficient finetuning methods such as LoRA are widely adopted in practice, especially in resource-constrained settings, and involve updating low-rank adapters rather than the full set of model parameters. The authors explicitly note that adapting their privacy backdoor techniques to such finetuning regimes remains for future work, highlighting an unresolved extension necessary to assess risk in common modern deployment scenarios.
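A minimal sketch of the LoRA-style update that the open question targets, assuming a linear layer whose pretrained weight W stays frozen while only low-rank factors A and B are trained (effective weight W + (alpha/r) * B A); the class name LoRALinear and the default hyperparameters are illustrative assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a pretrained nn.Linear: the base weight is frozen and only the
    low-rank adapter factors lora_A (r x d_in) and lora_B (d_out x r) receive
    gradients during downstream training."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # pretrained weights stay fixed
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the trainable low-rank correction B @ A applied to x.
        return self.base(x) + self.scaling * ((x @ self.lora_A.T) @ self.lora_B.T)
```

Because the attacker-planted backdoor lives in the frozen base weights while gradients flow only through the adapter factors, it is unclear whether the paper's data-trapping mechanism carries over; that gap is exactly what this question asks to resolve.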

References

We leave an extension to other types of finetuning (e.g., LoRA (Hu et al., 2021)) for future work.

Privacy Backdoors: Stealing Data with Corrupted Pretrained Models (2404.00473 - Feng et al., 30 Mar 2024) in Section 3 (Threat Model)