Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
102 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
43 tokens/sec
o3 Pro
6 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models (2503.19540v1)

Published 25 Mar 2025 in cs.CL and cs.AI

Abstract: Recent advancements in LLMs have significantly enhanced interactions between users and models. These advancements concurrently underscore the need for rigorous safety evaluations due to the manifestation of social biases, which can lead to harmful societal impacts. Despite these concerns, existing benchmarks may overlook the intrinsic weaknesses of LLMs, which can generate biased responses even with simple adversarial instructions. To address this critical gap, we introduce a new benchmark, Fairness Benchmark in LLM under Extreme Scenarios (FLEX), designed to test whether LLMs can sustain fairness even when exposed to prompts constructed to induce bias. To thoroughly evaluate the robustness of LLMs, we integrate prompts that amplify potential biases into the fairness assessment. Comparative experiments between FLEX and existing benchmarks demonstrate that traditional evaluations may underestimate the inherent risks in models. This highlights the need for more stringent LLM evaluation benchmarks to guarantee safety and fairness.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (5)
  1. Dahyun Jung (4 papers)
  2. Seungyoon Lee (5 papers)
  3. Hyeonseok Moon (20 papers)
  4. Chanjun Park (49 papers)
  5. Heuiseok Lim (49 papers)

Summary

We haven't generated a summary for this paper yet.