
Sentinel-Guided Zero-Shot Learning: A Collaborative Paradigm without Real Data Exposure (2403.09363v1)

Published 14 Mar 2024 in cs.CV

Abstract: With growing concerns over data privacy and model copyright, particularly in collaborations between AI service providers and data owners, this work proposes a Sentinel-Guided Zero-Shot Learning (SG-ZSL) paradigm. SG-ZSL enables efficient collaboration without exchanging models or sensitive data. It consists of a teacher model, a student model, and a generator that links the two. The teacher model serves as a sentinel on behalf of the data owner, standing in for real data to guide the student model at the AI service provider's end during training. To account for the disparity between the teacher's and the student's knowledge spaces, we introduce two variants of the teacher model: an omniscient teacher and a quasi-omniscient teacher. Under these teachers' guidance, the student model seeks to match the teacher's performance and to explore domains the teacher has not covered. To trade off privacy against performance, we further introduce two training protocols with distinct security levels, white-box and black-box, enhancing the paradigm's adaptability. Despite the inherent challenge of training without access to real data, SG-ZSL consistently outperforms competing approaches on ZSL and GZSL tasks, notably under the white-box protocol. Our comprehensive evaluation further attests to its robustness and efficiency across various setups, including the stringent black-box training protocol.
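To make the collaboration pattern concrete, below is a minimal PyTorch sketch of one SG-ZSL-style training step, assuming a pretrained teacher classifier held by the data owner, an attribute-conditioned feature generator, and a student classifier on the AI service provider's side. All module names, dimensions, and the specific losses here are illustrative assumptions rather than the paper's exact architecture; in particular, the `white_box` flag only mimics the two protocols by toggling whether gradients may flow through the teacher.

```python
# Hypothetical sketch of one SG-ZSL training step (names, sizes, and losses assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, ATTR_DIM, NOISE_DIM, NUM_CLASSES = 2048, 85, 64, 50  # assumed sizes

class Generator(nn.Module):
    """Maps a class-attribute vector plus noise to a synthetic visual feature,
    so the student can train without ever seeing the owner's real data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ATTR_DIM + NOISE_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM), nn.ReLU(),
        )

    def forward(self, attrs):
        z = torch.randn(attrs.size(0), NOISE_DIM, device=attrs.device)
        return self.net(torch.cat([attrs, z], dim=1))

teacher = nn.Linear(FEAT_DIM, NUM_CLASSES)   # sentinel: pretrained by the data owner
student = nn.Linear(FEAT_DIM, NUM_CLASSES)   # trained at the AI service provider
generator = Generator()
optimizer = torch.optim.Adam(
    list(student.parameters()) + list(generator.parameters()), lr=1e-4)

def train_step(class_attrs, class_ids, white_box=True):
    """Generate features for the given classes, then distill the teacher's
    predictions on them into the student. Real images never leave the owner."""
    feats = generator(class_attrs)
    if white_box:
        teacher_logits = teacher(feats)        # gradients may pass through the teacher
    else:
        with torch.no_grad():                  # black-box: teacher outputs only
            teacher_logits = teacher(feats)
    student_logits = student(feats)
    distill = F.kl_div(F.log_softmax(student_logits, dim=1),
                       F.softmax(teacher_logits, dim=1), reduction="batchmean")
    consistency = F.cross_entropy(student_logits, class_ids)  # keep features class-faithful
    loss = distill + consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example step under the black-box protocol, with stand-in attribute vectors.
attrs = torch.randn(32, ATTR_DIM)
ids = torch.randint(0, NUM_CLASSES, (32,))
print(train_step(attrs, ids, white_box=False))
```

Note that even in the black-box branch the generator still receives gradients through the student's loss; what the protocol removes is any gradient signal (and hence white-box access) through the teacher, which is the privacy-versus-performance trade-off the abstract describes.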
