Privacy-Preserving Task-Oriented Semantic Communications Against Model Inversion Attacks (2312.03252v1)

Published 6 Dec 2023 in cs.IT and math.IT

Abstract: Semantic communication has been identified as a core technology for the sixth generation (6G) of wireless networks. Recently, task-oriented semantic communications have been proposed for low-latency inference with limited bandwidth. Although transmitting only task-related information does protect a certain level of user privacy, adversaries could apply model inversion techniques to reconstruct the raw data or extract useful information, thereby infringing on users' privacy. To mitigate privacy infringement, this paper proposes an information bottleneck and adversarial learning (IBAL) approach to protect users' privacy against model inversion attacks. Specifically, we extract task-relevant features from the input based on the information bottleneck (IB) theory. To overcome the difficulty of calculating mutual information in high-dimensional space, we derive a variational upper bound to estimate the true mutual information. To prevent adversaries from reconstructing data from task-related features, we leverage adversarial learning to train the encoder to fool adversaries by maximizing reconstruction distortion. Furthermore, considering the impact of channel variations on the privacy-utility trade-off and the difficulty of manually tuning the weight of each loss, we propose an adaptive weight adjustment method. Numerical results demonstrate that the proposed approaches can effectively protect privacy without significantly affecting task performance and achieve better privacy-utility trade-offs than baseline methods.
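The abstract's training objective can be sketched as a weighted combination of three terms: a task loss (utility), a variational upper bound on the mutual information I(X; Z) (compression), and the adversary's reconstruction distortion (privacy, to be maximized). The sketch below is a minimal illustration of that structure, not the authors' implementation; all function names, the weighting rule, and the specific values are assumptions, and the paper's actual adaptive weight adjustment method is not specified in the abstract.

```python
def ibal_loss(task_loss, mi_upper_bound, recon_distortion, beta, lam):
    """Hypothetical IBAL-style encoder objective.

    task_loss        -- task head loss (utility term, minimized)
    mi_upper_bound   -- variational upper bound on I(X; Z) (compression term)
    recon_distortion -- adversary's reconstruction error (maximized, hence subtracted)
    beta, lam        -- trade-off weights between compression and privacy
    """
    return task_loss + beta * mi_upper_bound - lam * recon_distortion


def adaptive_weights(losses):
    """Toy stand-in for adaptive weight adjustment: normalize each loss by
    the total magnitude so no single term dominates as channel conditions
    vary. The paper's actual update rule is not given in the abstract."""
    total = sum(abs(l) for l in losses)
    if total == 0.0:
        return [1.0 / len(losses)] * len(losses)
    return [abs(l) / total for l in losses]
```

With beta = 0.1 and lam = 0.3, for example, a task loss of 1.0, an MI bound of 0.5, and a distortion of 0.2 combine to 1.0 + 0.05 - 0.06 = 0.99; the adaptive weights always sum to one, keeping the three terms on a comparable scale.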

Authors (5)
  1. Yanhu Wang (9 papers)
  2. Shuaishuai Guo (39 papers)
  3. Yiqin Deng (22 papers)
  4. Haixia Zhang (29 papers)
  5. Yuguang Fang (55 papers)
Citations (14)