Contrastive Learning and Adversarial Disentanglement for Privacy-Aware Task-Oriented Semantic Communication
Abstract: Task-oriented semantic communication systems have emerged as a promising approach to achieving efficient and intelligent data transmission in next-generation networks, where only information relevant to a specific task is communicated. This is particularly important in 6G-enabled Internet of Things (6G-IoT) scenarios, where bandwidth constraints, latency requirements, and data privacy are critical. However, existing methods struggle to fully disentangle task-relevant and task-irrelevant information, leading to privacy concerns and suboptimal performance. To address this, we propose an information-bottleneck-inspired method, named CLAD (contrastive learning and adversarial disentanglement). CLAD utilizes contrastive learning to effectively capture task-relevant features while employing adversarial disentanglement to discard task-irrelevant information. Additionally, because there is no reliable and reproducible method to quantify the minimality of encoded feature vectors, we introduce the Information Retention Index (IRI), a comparative metric used as a proxy for the mutual information between the encoded features and the input. The IRI reflects how minimal and informative the representation is, making it highly relevant for privacy-preserving and bandwidth-efficient 6G-IoT systems. Extensive experiments demonstrate that CLAD outperforms state-of-the-art baselines in terms of semantic extraction, task performance, privacy preservation, and IRI, making it a promising building block for responsible, efficient, and trustworthy 6G-IoT services.
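The contrastive component the abstract describes is typically built on an InfoNCE-style objective: each encoded feature is pulled toward a "positive" view sharing its task label and pushed away from the other samples in the batch. The sketch below is a minimal NumPy illustration of that objective, not the authors' implementation; the function name, shapes, and temperature value are assumptions for exposition only.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Minimal InfoNCE contrastive loss (illustrative sketch).

    anchors, positives: (N, D) arrays; row i of `positives` is the
    positive view for row i of `anchors`. All other rows in the batch
    serve as negatives. Lower loss = anchors align with their positives.
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    logits = (a @ p.T) / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability

    # Log-softmax over each row; matched pairs sit on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In a full CLAD-style pipeline this term would be minimized jointly with an adversarial disentanglement term (e.g., a discriminator trained to recover task-irrelevant attributes from the encoded features, which the encoder is trained to fool), but that second head is omitted here for brevity.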