Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving Vision Transformer (2401.05126v2)

Published 10 Jan 2024 in cs.CV and cs.LG

Abstract: We propose a novel method for privacy-preserving deep neural networks (DNNs) with the Vision Transformer (ViT). The method allows us not only to train and test models with visually protected images but also to avoid the performance degradation caused by the use of encrypted images, whereas conventional methods cannot avoid the influence of image encryption. A domain adaptation method is used to efficiently fine-tune ViT with encrypted images. In experiments, the method is demonstrated to outperform conventional methods in an image classification task on the CIFAR-10 and ImageNet datasets in terms of classification accuracy.
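
To make the pipeline in the abstract concrete, below is a minimal sketch of training a ViT on visually protected images, assuming a block-wise pixel-shuffling encryption keyed to the ViT patch size (as in the learnable image encryption literature the paper builds on). The function name blockwise_encrypt, the key value, and the use of torchvision's vit_b_16 are illustrative assumptions, not the paper's released code, and the domain-adaptation step used for efficient fine-tuning is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights


def blockwise_encrypt(images, key, block_size=16):
    """Shuffle pixel positions inside each block with a secret key.

    A hypothetical sketch of block-wise learnable image encryption,
    aligned with the ViT patch grid so each patch is scrambled independently.
    images: (N, C, H, W) tensor with H and W divisible by block_size.
    key:    integer seed shared by whoever encrypts the data.
    """
    n, c, h, w = images.shape
    gh, gw = h // block_size, w // block_size
    # Split the image into non-overlapping blocks matching the patch grid.
    blocks = images.reshape(n, c, gh, block_size, gw, block_size)
    blocks = blocks.permute(0, 2, 4, 1, 3, 5)                  # (N, gh, gw, C, bs, bs)
    blocks = blocks.reshape(n, gh * gw, c * block_size * block_size)
    # Derive one fixed pixel permutation per key and apply it to every block.
    gen = torch.Generator().manual_seed(key)
    perm = torch.randperm(c * block_size * block_size, generator=gen)
    blocks = blocks[..., perm]
    # Reassemble the visually protected image from the shuffled blocks.
    blocks = blocks.reshape(n, gh, gw, c, block_size, block_size)
    blocks = blocks.permute(0, 3, 1, 4, 2, 5)
    return blocks.reshape(n, c, h, w)


# Fine-tune a pretrained ViT so that it only ever sees encrypted inputs
# (downloads ImageNet weights on first use).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(model.hidden_dim, 10)        # e.g. 10 classes for CIFAR-10
x = torch.rand(4, 3, 224, 224)                       # toy plain images in [0, 1]
logits = model(blockwise_encrypt(x, key=42))          # train and test on protected images
```

Because the block size matches the 16x16 ViT patch size, the scrambling stays within patch boundaries, which is what lets a patch-based transformer tolerate this kind of encryption far better than a convolutional network would.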
