Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models (2311.17394v1)

Published 29 Nov 2023 in cs.CR

Abstract: With the advent of sophisticated AI technologies, the proliferation of deepfakes and the spread of m/disinformation have emerged as formidable threats to the integrity of information ecosystems worldwide. This paper provides an overview of the current literature. Alongside frontier AI's crucial role in developing defense mechanisms for detecting deepfakes, we highlight the mechanisms through which generative AI based on large models (LM-based GenAI) crafts seemingly convincing yet fabricated content. We explore the multifaceted implications of LM-based GenAI for society, politics, and individual privacy, underscoring the urgent need for robust defense strategies. To address these challenges, we introduce an integrated framework that combines advanced detection algorithms, cross-platform collaboration, and policy-driven initiatives to mitigate the risks associated with AI-Generated Content (AIGC). By leveraging multi-modal analysis, digital watermarking, and machine-learning-based authentication techniques, we propose a defense mechanism that adapts to the ever-evolving capabilities of AI. Furthermore, the paper advocates a global consensus on the ethical use of GenAI and the implementation of cyber-wellness educational programs to enhance public awareness and resilience against m/disinformation. Our findings suggest that a proactive, collaborative approach combining technological innovation with regulatory oversight is essential for safeguarding netizens in cyberspace against the insidious effects of deepfakes and GenAI-enabled m/disinformation campaigns.
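The abstract names digital watermarking as one of the authentication techniques in the proposed defense mechanism. As a rough illustration of the general idea (a toy sketch, not the paper's actual scheme, and the `embed`/`extract` helpers are hypothetical names), one can hide a provenance payload in text using zero-width Unicode characters:

```python
# Toy invisible text watermark: each payload bit is encoded as a
# zero-width character (U+200B for 0, U+200C for 1) appended to the
# end of the text. Illustrative only; real schemes are more robust.

ZERO, ONE = "\u200b", "\u200c"

def embed(text: str, payload: str) -> str:
    """Append the payload's bits as zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    marks = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + marks

def extract(marked: str) -> str:
    """Recover the payload string from the zero-width marks."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in marked if ch in (ZERO, ONE))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

marked = embed("This statement is authentic.", "id42")
assert marked != "This statement is authentic."  # marks were added
assert extract(marked) == "id42"                 # payload recovered
```

A watermark like this survives copy-and-paste but not re-typing or normalization, which is why the paper pairs watermarking with multi-modal detection and ML-based authentication rather than relying on any single signal.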

Authors (4)
  1. Mohamed R. Shoaib
  2. Zefan Wang
  3. Milad Taleby Ahvanooey
  4. Jun Zhao
Citations (24)