Extend privacy backdoors to parameter-efficient finetuning (LoRA)
Extend the privacy backdoor construction presented for full finetuning to parameter-efficient finetuning methods, specifically Low-Rank Adaptation (LoRA), by designing backdoors that remain effective when downstream training updates only the low-rank adapter matrices rather than the full pretrained weights (see the sketch below).
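For context on what such an extension must handle, the following minimal sketch (a hypothetical illustration of the standard LoRA parameterization, not the paper's backdoor construction) shows why the full-finetuning setting does not carry over directly: under LoRA the pretrained weight W0 is frozen, and only the low-rank matrices A and B receive gradients, so any data-capture mechanism would have to act through the rank-r update (alpha/r)·BA rather than through changes to W0 itself. The class and variable names are illustrative only.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Standard LoRA adapter: h = W0 x + (alpha / r) * B A x, with W0 frozen."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the (potentially corrupted) pretrained weights stay fixed
        self.scaling = alpha / r
        # A is small Gaussian, B is zero, so the adapter is initially a no-op
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(64, 64), r=4)
    x = torch.randn(2, 64)
    layer(x).sum().backward()
    # Only the adapter matrices receive gradients; the backdoored base weight does not change.
    print(layer.base.weight.grad)           # None
    print(layer.lora_B.grad is not None)    # True
```

A consequence visible in this sketch is that the finetuned model exposed to the attacker differs from the pretrained one only by a matrix of rank at most r, which constrains how much finetuning data a backdoor could hope to encode and recover.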
References
We leave an extension to other types of finetuning (e.g., LoRA (Hu et al., 2021)) for future work.
— Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
(Feng et al., 30 Mar 2024, arXiv:2404.00473), Section 3 (Threat Model)