Killer Apps: Low-Speed, Large-Scale AI Weapons (2402.01663v4)

Published 14 Jan 2024 in cs.CY, cs.CR, and cs.LG

Abstract: The accelerating advancements in AI and Machine Learning (ML), highlighted by the development of cutting-edge Generative Pre-trained Transformer (GPT) models by organizations such as OpenAI, Meta, and Anthropic, present new challenges and opportunities in warfare and security. Much of the current focus is on AI's integration within weapons systems and its role in rapid decision-making in kinetic conflict. However, an equally important but often overlooked aspect is the potential of AI-based psychological manipulation at internet scales within the information domain. These capabilities could pose significant threats to individuals, organizations, and societies globally. This paper explores the concept of AI weapons, their deployment, detection, and potential countermeasures.
