
Contrastive Learning and Adversarial Disentanglement for Privacy-Aware Task-Oriented Semantic Communication

Published 30 Oct 2024 in cs.LG, cs.AI, cs.CV, cs.IT, eess.IV, and math.IT | arXiv:2410.22784v3

Abstract: Task-oriented semantic communication systems have emerged as a promising approach to achieving efficient and intelligent data transmission in next-generation networks, where only the information relevant to a specific task is communicated. This is particularly important in 6G-enabled Internet of Things (6G-IoT) scenarios, where bandwidth constraints, latency requirements, and data privacy are critical. However, existing methods struggle to fully disentangle task-relevant from task-irrelevant information, leading to privacy concerns and suboptimal performance. To address this, we propose an information-bottleneck-inspired method named CLAD (contrastive learning and adversarial disentanglement). CLAD uses contrastive learning to capture task-relevant features while employing adversarial disentanglement to discard task-irrelevant information. Additionally, because reliable and reproducible methods for quantifying the minimality of encoded feature vectors are lacking, we introduce the Information Retention Index (IRI), a comparative metric that serves as a proxy for the mutual information between the encoded features and the input. The IRI reflects how minimal and informative the representation is, making it highly relevant for privacy-preserving and bandwidth-efficient 6G-IoT systems. Extensive experiments demonstrate that CLAD outperforms state-of-the-art baselines in semantic extraction, task performance, privacy preservation, and IRI, making it a promising building block for responsible, efficient, and trustworthy 6G-IoT services.
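The contrastive-learning ingredient the abstract refers to is typically an InfoNCE-style objective, which pulls an encoding toward a "positive" view of the same input and pushes it away from other samples in the batch. The sketch below is a minimal NumPy illustration of that generic objective, not the paper's actual loss; the function name, the temperature value, and the batch layout (row i of `positives` matches row i of `anchors`) are all assumptions for the example.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss: row i of `positives` is the
    positive view for row i of `anchors`; every other row in the batch
    acts as a negative. Lower loss = matching pairs are most similar."""
    # L2-normalise so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                   # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the diagonal (matching pairs) as the targets
    return -np.mean(np.diag(log_probs))

# Perfectly aligned pairs drive the loss toward zero
batch = np.eye(4)
print(info_nce(batch, batch))   # close to 0
```

In a CLAD-style system this term would be combined with an adversarial penalty (e.g. via a discriminator or gradient reversal) that discourages the encoding from retaining task-irrelevant, privacy-sensitive attributes; that second term is what drives the disentanglement the abstract describes.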

Authors (3)
