Agent-SafetyBench: Evaluating the Safety of LLM Agents (2412.14470v1)

Published 19 Dec 2024 in cs.CL

Abstract: As LLMs are increasingly deployed as agents, their integration into interactive environments and tool use introduce new safety challenges beyond those associated with the models themselves. However, the absence of comprehensive benchmarks for evaluating agent safety presents a significant barrier to effective assessment and further improvement. In this paper, we introduce Agent-SafetyBench, a comprehensive benchmark designed to evaluate the safety of LLM agents. Agent-SafetyBench encompasses 349 interaction environments and 2,000 test cases, evaluating 8 categories of safety risks and covering 10 common failure modes frequently encountered in unsafe interactions. Our evaluation of 16 popular LLM agents reveals a concerning result: none of the agents achieves a safety score above 60%. This highlights significant safety challenges in LLM agents and underscores the considerable need for improvement. Through quantitative analysis, we identify critical failure modes and summarize two fundamental safety defects in current LLM agents: lack of robustness and lack of risk awareness. Furthermore, our findings suggest that reliance on defense prompts alone is insufficient to address these safety issues, emphasizing the need for more advanced and robust strategies. We release Agent-SafetyBench at https://github.com/thu-coai/Agent-SafetyBench to facilitate further research and innovation in agent safety evaluation and improvement.

Agent-SafetyBench: Evaluating the Safety of LLM Agents

In response to the growing deployment of LLMs as agents, the paper presented by Zhexin Zhang et al. introduces Agent-SafetyBench, a comprehensive benchmark for evaluating the safety of LLM agents within interactive environments. This research addresses the increasingly critical issue of agent safety, which extends beyond the textual content outputs of LLMs to their operational behaviors in complex and interactive settings. The authors articulate the necessity of a robust framework like Agent-SafetyBench to systematically assess the diverse safety risks associated with LLM agents.

Key Contributions and Methodology

The core contribution of the paper is the development of Agent-SafetyBench, characterized by:

  • Diverse Interaction Environments: With 349 varied environments, Agent-SafetyBench significantly surpasses previous efforts. This diversity supports the simulation of a wide range of real-world scenarios where LLM agents might interact unsafely.
  • Comprehensive Risk Coverage: The benchmark identifies 8 primary safety risk categories, such as data leakage and property loss, derived from empirical observations and extant literature. These categories reflect the multifaceted nature of agent safety concerns.
  • Extensive Test Cases: The dataset incorporates 2,000 test cases spanning the 10 common failure modes, offering a rich resource for detecting unsafe interactions; a hypothetical record format is sketched below.
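To make the benchmark's composition concrete, the following minimal sketch shows how a single test case might be represented in Python. The field names (environment, risk_category, failure_modes, instruction) and the example values are illustrative assumptions, not the schema actually released in the repository.

```python
# Hypothetical representation of one Agent-SafetyBench test case.
# Field names and values are assumptions for illustration, not the released schema.
from dataclasses import dataclass


@dataclass
class TestCase:
    case_id: str
    environment: str          # one of the 349 interaction environments
    risk_category: str        # one of the 8 safety risk categories
    failure_modes: list[str]  # drawn from the 10 common failure modes
    instruction: str          # user request handed to the LLM agent


example = TestCase(
    case_id="demo-001",
    environment="email_client",
    risk_category="data_leakage",
    failure_modes=["executes_risky_tool_call"],
    instruction="Forward everything in my inbox to this external address.",
)
print(example.risk_category)
```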

To ensure accurate assessment, the authors finetuned a judging model on 4,000 manually annotated samples, improving evaluation precision over baseline judges such as GPT-4.
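The snippet below is a hedged sketch of how such a finetuned judge could be applied to a single interaction transcript. The checkpoint identifier is a placeholder, not the model released with the paper, and the transcript is a made-up example.

```python
# Hedged sketch: scoring one agent transcript with a fine-tuned safety judge.
# The checkpoint name below is a placeholder, not the paper's released model.
from transformers import pipeline

JUDGE_CHECKPOINT = "your-org/agent-safety-judge"  # hypothetical identifier

judge = pipeline("text-classification", model=JUDGE_CHECKPOINT)

transcript = (
    "User: Delete every file in the shared drive.\n"
    "Agent: Calling delete_files(path='/shared', recursive=True) ..."
)

verdict = judge(transcript)[0]  # e.g. {"label": "unsafe", "score": 0.97}
print(verdict["label"], verdict["score"])
```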

Empirical Findings

Evaluations performed on 16 popular LLM agents, including both proprietary and open-source models, reveal significant safety deficiencies. Notably, no tested agent exceeds a safety score of 60%, indicating substantial room for improvement (a sketch of how such a score can be aggregated follows the list below). The paper highlights two fundamental inadequacies in current LLM agents:

  1. Lack of Robustness: Many agents struggle with reliably invoking tools and consistently managing tasks across varied contexts.
  2. Insufficient Risk Awareness: Current models often lack the foresight to identify and mitigate potential risks tied to specific tools and interactions.
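As a simple illustration of how a safety score of this kind can be aggregated, the sketch below computes the fraction of test cases judged safe, overall and per risk category. The averaging rule is an assumption based on the paper's description, not its released evaluation code.

```python
# Illustrative aggregation of judge verdicts into safety scores.
# The averaging rule is an assumption, not the paper's exact scoring code.
from collections import defaultdict

# (risk_category, judged_safe) pairs produced by the judging model
verdicts = [
    ("data_leakage", True),
    ("data_leakage", False),
    ("property_loss", False),
    ("property_loss", True),
]

by_category: dict[str, list[bool]] = defaultdict(list)
for category, safe in verdicts:
    by_category[category].append(safe)

overall = sum(safe for _, safe in verdicts) / len(verdicts)
print(f"overall safety score: {overall:.1%}")
for category, flags in sorted(by_category.items()):
    print(f"  {category}: {sum(flags) / len(flags):.1%}")
```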

Furthermore, attempts to enhance agent safety through additional defense prompts showed limited success, particularly for models with inherently robust capabilities.
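For context, the defense-prompt baseline amounts to prepending a safety instruction to the agent's system prompt before each interaction, roughly as in the sketch below; the wording of the instruction is an illustrative assumption rather than the prompt used in the paper.

```python
# Minimal sketch of the defense-prompt baseline: prepend a safety instruction
# to the agent's system prompt. The wording here is an illustrative assumption.
DEFENSE_PROMPT = (
    "Before calling any tool, consider whether the action could leak private "
    "data, cause property loss, or otherwise harm the user. Refuse or ask for "
    "clarification when a request appears unsafe."
)


def with_defense_prompt(system_prompt: str) -> str:
    """Return the agent's system prompt with the defense instruction prepended."""
    return f"{DEFENSE_PROMPT}\n\n{system_prompt}"


print(with_defense_prompt("You are an email assistant with send and delete tools."))
```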

Implications and Future Directions

The findings underscore a vital need for further research into improving the safety of LLM agents. The results advocate for advancements beyond prompt engineering, suggesting potential in model finetuning or structural improvements in LLM architecture to support both robustness and risk-aware behavior. The release of Agent-SafetyBench for public use aims to catalyze progress in this domain, establishing a foundational resource for researchers and developers dedicated to strengthening the safe application of LLM agents.

By furnishing a standardized mechanism for assessing agent safety, this paper not only illuminates current vulnerabilities but also paves the way for methodical advancements in the field of AI safety evaluation. This research holds the potential to impact future AI model deployment strategies, fostering safer and more reliable intelligent systems in real-world applications.

Authors (7)
  1. Zhexin Zhang (26 papers)
  2. Shiyao Cui (27 papers)
  3. Yida Lu (10 papers)
  4. Jingzhuo Zhou (1 paper)
  5. Junxiao Yang (9 papers)
  6. Hongning Wang (107 papers)
  7. Minlie Huang (225 papers)