Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts? (2304.01002v3)

Published 3 Apr 2023 in cs.CL, cs.AI, and cs.HC

Abstract: Advances in LLMs (e.g., GPT-4, LLaMA) have enabled the large-scale generation of coherent text that closely resembles human writing, giving rise to so-called deepfake texts. This progress raises security and privacy concerns and calls for effective methods to distinguish deepfake texts from human-written ones. Although prior work has studied humans' ability to detect deepfake texts, none has examined whether "collaboration" among humans improves that detection. To address this gap in understanding, we conducted experiments with two groups: (1) non-expert individuals from the AMT platform and (2) writing experts from the Upwork platform. The results demonstrate that collaboration among humans can improve the detection of deepfake texts for both groups, increasing detection accuracy by 6.36% for non-experts and 12.76% for experts compared to individuals' detection accuracies. We further analyze the explanations humans gave for labeling a piece of text as deepfake, and find that the strongest indicator of deepfake texts is their lack of coherence and consistency. Our study provides useful insights for designing future tools and frameworks that facilitate the collaborative human detection of deepfake texts. The experiment datasets and AMT implementations are available at: https://github.com/huashen218/LLM-deepfake-human-study.git
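
The study's central comparison is individual detection accuracy versus group-level detection accuracy. As a minimal sketch of that comparison (not the paper's actual protocol), the Python snippet below aggregates per-annotator verdicts by majority vote and contrasts the result with pooled individual accuracy. The data layout and the voting rule are hypothetical simplifications: in the real experiments, collaborators deliberated and reached a joint verdict rather than voting independently, and the released datasets live in the linked repository.

```python
# Sketch: individual vs. group (majority-vote) accuracy on binary
# "deepfake or human" labels. Data below is illustrative only.
from collections import Counter

# labels[item] -> one verdict per annotator; truth[item] -> gold label
labels = {
    "text_1": ["deepfake", "deepfake", "human"],
    "text_2": ["human", "human", "human"],
    "text_3": ["deepfake", "deepfake", "human"],
}
truth = {"text_1": "deepfake", "text_2": "human", "text_3": "deepfake"}

def individual_accuracy(labels, truth):
    """Mean accuracy of single annotators, pooled over all verdicts."""
    verdicts = [(v, truth[item]) for item, vs in labels.items() for v in vs]
    return sum(v == t for v, t in verdicts) / len(verdicts)

def group_accuracy(labels, truth):
    """Accuracy when each item's verdict is the annotators' majority vote."""
    correct = 0
    for item, vs in labels.items():
        majority, _ = Counter(vs).most_common(1)[0]
        correct += majority == truth[item]
    return correct / len(labels)

print(f"individual: {individual_accuracy(labels, truth):.2%}")  # 77.78%
print(f"group:      {group_accuracy(labels, truth):.2%}")       # 100.00%
```

With an odd number of annotators and binary labels, majority voting has no ties; the toy data is chosen so the group verdict corrects the lone dissenting errors, illustrating the wisdom-of-crowds effect the paper measures with deliberating groups.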

Authors (6)
  1. Adaku Uchendu (16 papers)
  2. Jooyoung Lee (48 papers)
  3. Hua Shen (32 papers)
  4. Thai Le (38 papers)
  5. Ting-Hao 'Kenneth' Huang (42 papers)
  6. Dongwon Lee (65 papers)
Citations (23)