Unlearnable Examples Give a False Sense of Data Privacy: Understanding and Relearning (2306.02064v2)
Abstract: Unlearnable examples have been proposed to prevent third parties from exploiting unauthorized data: imperceptible perturbations are added to data before public release so that models trained on it learn little of value. These unlearnable examples effectively misdirect the training process, causing the model to focus on learning perturbation features while neglecting the semantic features of the images. In this paper, we conduct an in-depth analysis and observe that models can learn both image features and perturbation features of unlearnable examples at an early training stage, but are quickly trapped into learning only the perturbation features, because the shallow layers tend to fit the perturbations and propagate harmful activations to deeper layers. Based on these observations, we propose Progressive Staged Training, a self-adaptive training framework specifically designed to break unlearnable examples. The proposed framework effectively prevents models from becoming trapped in perturbation-feature learning. We evaluate our method on multiple model architectures and diverse datasets, including CIFAR-10, CIFAR-100, and ImageNet-mini. Our method circumvents the unlearnability of all state-of-the-art methods in the literature, revealing that existing unlearnable examples give a false sense of privacy protection, and it provides a reliable baseline for further evaluation of unlearnability techniques.
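To make the abstract's idea more concrete, below is a minimal, hypothetical sketch of a staged training loop in PyTorch. It is not the paper's Progressive Staged Training algorithm; the stage length, the loss-drop heuristic used as a proxy for shortcut (perturbation-feature) learning, and the choice to reinitialize shallow layers are all illustrative assumptions layered on the abstract's high-level description.

```python
# Hypothetical sketch: train in short stages, detect suspiciously fast loss
# collapse (a crude proxy for the model latching onto perturbation features),
# and reinitialize the shallow layers before continuing. All thresholds and
# heuristics are assumptions, not the paper's actual algorithm.

import torch
import torch.nn as nn
import torch.nn.functional as F


def reset_shallow_layers(model: nn.Module, num_layers: int = 1) -> None:
    """Reinitialize the first `num_layers` parameterized modules
    (assumption: the shallowest layers are most prone to fitting
    the added perturbations)."""
    count = 0
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            module.reset_parameters()
            count += 1
            if count >= num_layers:
                break


def staged_training(model, loader, num_stages=10, epochs_per_stage=2,
                    collapse_ratio=0.2, lr=0.1, device="cpu"):
    """Run training in stages; if the loss within a stage drops below
    `collapse_ratio` times its starting value, treat it as shortcut
    learning and reset the shallow layers."""
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for stage in range(num_stages):
        start_loss, end_loss = None, None
        for _ in range(epochs_per_stage):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                loss = F.cross_entropy(model(x), y)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                if start_loss is None:
                    start_loss = loss.item()
                end_loss = loss.item()
        # Heuristic collapse check: unusually rapid loss decrease within
        # a single stage suggests the model is fitting easy perturbation
        # features rather than image semantics.
        if start_loss is not None and end_loss < collapse_ratio * start_loss:
            reset_shallow_layers(model, num_layers=1)
```

In this sketch, `model` is any image classifier and `loader` a standard DataLoader over the (possibly unlearnable) training set; the key design choice mirrored from the abstract is intervening on shallow layers, since those are described as the ones that propagate harmful perturbation activations to deeper layers.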