Large Language Model Safety: A Holistic Survey (2412.17686v1)

Published 23 Dec 2024 in cs.AI and cs.CL

Abstract: The rapid development and deployment of LLMs have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to a review of the mitigation methodologies and evaluation resources for these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the LLM safety technology roadmaps proposed and followed by AI companies and institutes, and AI governance aimed at LLM safety, with discussions of international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.

Authors (13)
  1. Dan Shi (4 papers)
  2. Tianhao Shen (15 papers)
  3. Yufei Huang (81 papers)
  4. Zhigen Li (3 papers)
  5. Yongqi Leng (4 papers)
  6. Renren Jin (17 papers)
  7. Chuang Liu (71 papers)
  8. Xinwei Wu (9 papers)
  9. Zishan Guo (5 papers)
  10. Linhao Yu (10 papers)
  11. Ling Shi (119 papers)
  12. Bojian Jiang (2 papers)
  13. Deyi Xiong (103 papers)