Trustworthy AI-Generative Content in Intelligent 6G Network: Adversarial, Privacy, and Fairness (2405.05930v1)
Abstract: AI-generated content (AIGC) models, exemplified by large language models (LLMs), have brought revolutionary changes to the field of content generation. High-speed, wide-coverage 6G networks are an ideal platform for delivering powerful AIGC mobile service applications, and future 6G mobile networks must also support intelligent and personalized mobile generation services. However, significant ethical and security issues in current AIGC models, such as adversarial attacks, privacy leakage, and unfairness, greatly undermine the credibility of 6G intelligent networks, especially their ability to guarantee secure, private, and fair AIGC applications. In this paper, we propose TrustGAIN, a novel paradigm for ensuring trustworthy large-scale AIGC services in future 6G networks. We first discuss the adversarial attacks and privacy threats faced by AIGC systems in 6G networks, along with the corresponding protection measures. We then emphasize the importance of ensuring the unbiasedness and fairness of mobile generative services in future intelligent networks. In particular, we present a use case demonstrating that TrustGAIN can effectively guide the detection of malicious or AI-generated false information. We believe that TrustGAIN is a necessary paradigm for intelligent and trustworthy 6G networks to support AIGC services while ensuring their security, privacy, and fairness.
- Siyuan Li
- Xi Lin
- Yaju Liu
- Jianhua Li