CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models (2404.13161v1)

Published 19 Apr 2024 in cs.CR and cs.LG

Abstract: LLMs introduce new security risks, but there are few comprehensive evaluation suites to measure and reduce these risks. We present CyberSecEval 2, a novel benchmark to quantify LLM security risks and capabilities. We introduce two new areas for testing: prompt injection and code interpreter abuse. We evaluated multiple state-of-the-art (SOTA) LLMs, including GPT-4, Mistral, Meta Llama 3 70B-Instruct, and Code Llama. Our results show that conditioning away risk of attack remains an unsolved problem; for example, all tested models showed between 26% and 41% successful prompt injection tests. We further introduce the safety-utility tradeoff: conditioning an LLM to reject unsafe prompts can cause the LLM to falsely reject answering benign prompts, which lowers utility. We propose quantifying this tradeoff using the False Refusal Rate (FRR). As an illustration, we introduce a novel test set to quantify FRR for cyberattack helpfulness risk. We find that many LLMs are able to comply with "borderline" benign requests while still rejecting most unsafe requests. Finally, we quantify the utility of LLMs for automating a core cybersecurity task, that of exploiting software vulnerabilities. This is important because the offensive capabilities of LLMs are of intense interest; we quantify this by creating novel test sets for four representative problems. We find that models with coding capabilities perform better than those without, but that further work is needed for LLMs to become proficient at exploit generation. Our code is open source and can be used to evaluate other LLMs.
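
The two headline metrics in the abstract are simple rates over judged model responses: the prompt injection success rate is the fraction of injected test cases in which the model follows the attacker's instruction, and the False Refusal Rate (FRR) is the fraction of benign (or "borderline" benign) prompts the model declines to answer. The Python sketch below is purely illustrative of how such rates could be computed from pre-judged results; the field names and verdict labels are assumptions for this example, not the schema of the paper's open-source benchmark suite.

```python
# Illustrative sketch (not the paper's actual code): computing the two headline
# rates described in the abstract from a list of pre-judged test results.
# The field names ("kind", "verdict") and label strings are hypothetical.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class JudgedCase:
    kind: str     # "prompt_injection" or "benign"
    verdict: str  # e.g. "injection_succeeded", "resisted", "refused", "complied"

def prompt_injection_success_rate(cases: Iterable[JudgedCase]) -> float:
    """Fraction of prompt-injection cases where the injected instruction was followed."""
    pi = [c for c in cases if c.kind == "prompt_injection"]
    if not pi:
        return 0.0
    return sum(c.verdict == "injection_succeeded" for c in pi) / len(pi)

def false_refusal_rate(cases: Iterable[JudgedCase]) -> float:
    """FRR: fraction of benign prompts that the model refused to answer."""
    benign = [c for c in cases if c.kind == "benign"]
    if not benign:
        return 0.0
    return sum(c.verdict == "refused" for c in benign) / len(benign)

if __name__ == "__main__":
    demo = [
        JudgedCase("prompt_injection", "injection_succeeded"),
        JudgedCase("prompt_injection", "resisted"),
        JudgedCase("benign", "complied"),
        JudgedCase("benign", "refused"),
    ]
    print(f"Injection success rate: {prompt_injection_success_rate(demo):.0%}")
    print(f"False Refusal Rate:     {false_refusal_rate(demo):.0%}")
```

In the paper, both rates are reported per model, which is what makes the safety-utility tradeoff visible: lowering attack success by conditioning the model more aggressively tends to push FRR up.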

Authors (13)
  1. Manish Bhatt (10 papers)
  2. Sahana Chennabasappa (6 papers)
  3. Yue Li (218 papers)
  4. Cyrus Nikolaidis (5 papers)
  5. Daniel Song (6 papers)
  6. Shengye Wan (6 papers)
  7. Faizan Ahmad (4 papers)
  8. Cornelius Aschermann (4 papers)
  9. Yaohui Chen (16 papers)
  10. Dhaval Kapil (3 papers)
  11. David Molnar (4 papers)
  12. Spencer Whitman (5 papers)
  13. Joshua Saxe (15 papers)
Citations (22)