
OutSafe-Bench: Unified Safety Benchmark

Updated 20 November 2025
  • OutSafe-Bench names three distinct research efforts: a multimodal content-safety benchmark for LLMs, an out-of-distribution test suite for 3D point cloud perception, and a low-cost physical platform for human-robot interaction.
  • The content-safety benchmark introduces novel metrics such as MCRS and FairScore, while the perception benchmark leverages ensemble methods to quantify risk and uncertainty.
  • Together, the three efforts support multilingual content evaluation, detailed sensor-failure analysis, and low-cost physical testing for real-time safety applications.

OutSafe-Bench encompasses three distinct research efforts under the same name: (1) a multimodal benchmark for offensive content detection in LLMs, (2) a test suite for out-of-distribution detection in 3D point cloud semantic segmentation, and (3) a cost-effective physical platform for evaluating safe human-robot interaction in mobile robotics. Each OutSafe-Bench instantiation targets a critical aspect of system safety—content moderation, perceptual robustness, or physical collision avoidance—in a domain-specific manner. The following exposition provides an in-depth analysis, organized by domain, covering motivation and contributions, methodologies, evaluation metrics, empirical findings, comparative context, and reproducibility considerations.

1. OutSafe-Bench for Multimodal Content Safety in MLLMs

OutSafe-Bench, as introduced for multimodal LLMs (MLLMs), is the first comprehensive content safety benchmark spanning text, image, audio, and video modalities, with annotations in both Chinese and English, systematically scored across nine risk categories: Privacy & Property, Prejudice & Discrimination, Crime & Illegal Activities, Ethics & Morality, Violence & Hatred, False Information & Misdirection, Political Sensitivity, Physical & Mental Health, and Copyright & Intellectual Property (Yan et al., 13 Nov 2025).

1.1 Motivation and Core Contributions

Existing MLLM safety evaluations lack full multimodal coverage, overlook cross-risk interdependencies, and are rarely bilingual. OutSafe-Bench addresses these gaps by supplying a large, diverse, and richly annotated dataset and by introducing novel scoring and evaluation protocols:

  • Dataset Composition: 18,000 text prompts, 4,500 images, 450 audio clips, 450 videos; balanced across both English and Chinese, with manual and semi-automated validation using keyword matching and MLLMs.
  • Risk Categorization: Each sample is scored on a 0–10 severity scale across all nine risk classes, using scenario- and item-specific definitions.
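
For concreteness, the sketch below shows what a single annotated sample might look like. The field names and layout are hypothetical illustrations of the annotation scheme described above; the paper does not publish a record schema.

```python
# Hypothetical annotation record for one OutSafe-Bench sample.
# Field names are illustrative, not taken from the paper.
RISK_CATEGORIES = [
    "privacy_property", "prejudice_discrimination", "crime_illegal",
    "ethics_morality", "violence_hatred", "false_information",
    "political_sensitivity", "physical_mental_health", "copyright_ip",
]

sample = {
    "id": "txt-000001",
    "modality": "text",      # text | image | audio | video
    "language": "en",        # en | zh (images carry no language tag)
    "content": "...",        # prompt text or media path
    # One 0-10 severity score per risk category.
    "risk_scores": {cat: 0.0 for cat in RISK_CATEGORIES},
}
sample["risk_scores"]["violence_hatred"] = 7.5
```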

1.2 Metrics and Evaluation Framework

OutSafe-Bench pioneers two major methodological innovations:

  • Multidimensional Cross Risk Score (MCRS): For an output $x$, $R(x) = [r_1(x), \ldots, r_9(x)]$ is its risk vector, and risk aggregation leverages a learned cross-category influence matrix $\gamma$ (normalized Sentence-BERT cosine similarities) to encode semantic overlaps. The final score is $R^{(j,k)} = \sum_{q=1}^{9} \gamma_{(k,q)} \, \bar{r}_q^{(j,k)}$ for model $M_j$ and scenario $k$.
  • FairScore: An automated, weighted aggregation of multi-reviewer (MLLM) risk assessments. Jury models are weighted proportionally to their documented accuracy on held-out datasets, mitigating bias associated with single-judge or uniform-ensemble protocols. The top-5 MLLMs (Claude-3.7, Deepseek-v3, GPT-4o, GPT-4o-mini, Ernie-4.0) serve as adaptive jurors.
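
The NumPy sketch below illustrates both aggregations as defined above. The influence matrix, jury scores, and accuracy weights are placeholder values, and the indexing follows a plausible reading of the formula rather than the authors' released code.

```python
import numpy as np

def mcrs(risk_scores: np.ndarray, gamma: np.ndarray) -> np.ndarray:
    """Multidimensional Cross Risk Score.

    risk_scores: (n_samples, 9) per-category severities for one
                 model/scenario pair.
    gamma:       (9, 9) cross-category influence matrix (row-normalized
                 Sentence-BERT cosine similarities).
    Returns one gamma-weighted aggregate per category row k:
        R_k = sum_q gamma[k, q] * mean_over_samples(r_q).
    """
    r_bar = risk_scores.mean(axis=0)   # mean severity per category, shape (9,)
    return gamma @ r_bar               # weighted aggregation across categories

def fairscore(jury_scores: np.ndarray, jury_accuracy: np.ndarray) -> float:
    """Accuracy-weighted aggregation of multi-reviewer risk assessments.

    jury_scores:   (n_jurors,) risk assessments from the MLLM jury.
    jury_accuracy: (n_jurors,) documented held-out accuracies, used as weights.
    """
    w = jury_accuracy / jury_accuracy.sum()  # normalize weights to sum to 1
    return float(w @ jury_scores)

# Placeholder values for illustration only.
gamma = np.full((9, 9), 1 / 9)                  # uniform influence matrix
scores = np.random.default_rng(0).uniform(0, 10, size=(100, 9))
print(mcrs(scores, gamma))
print(fairscore(np.array([2.1, 1.8, 2.4, 2.0, 1.9]),
                np.array([0.92, 0.90, 0.89, 0.85, 0.84])))
```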

Table 1: OutSafe-Bench Modalities and Datasets

Modality   #Samples   Language   Annotation
Text       18,000     EN/CN      9 risk scores
Image      4,500      N/A        9 risk scores
Audio      450        EN/CN      9 risk scores
Video      450        EN/CN      9 risk scores

1.3 Empirical Results and Analysis

  • Model Ranking and Weaknesses: Claude-3.7-Sonnet exhibits the lowest risk (0.7436), followed by Deepseek-v3 (0.8130); Qwen-2.5-72B performs best in the fully multimodal setting (0.9193). The greatest vulnerabilities appear in video and audio (highest risk ≈1.8–2.4), with systematic weaknesses in “Violence & Hatred” and “False Information & Misdirection”.
  • Cross-lingual Risk: English prompts consistently elicit higher-risk outputs than Chinese prompts, even from the same models.
  • Failure Modes: Video frame subsampling fails to capture temporal context, leading to misclassification; noisy audio input fosters hallucination; and cross-modal risk leakage is observed when, for example, safe textual output is paired with unsafe content in linked images.

1.4 Comparative Significance

OutSafe-Bench covers more modalities (4 vs. ≤ 3) and categories (9 vs. ≤ 6) than prior safety benchmarks, enhances annotation depth, and introduces MCRS and FairScore—enabling both multidimensional and jury-weighted risk aggregation. Its findings demonstrate persistent safety vulnerabilities in current MLLMs, especially regarding non-text data.

1.5 Extensions and Open Problems

Proposed future directions include expansion to longer video/multi-turn dialogue, enhanced cross-modal consistency checks, support for additional languages, and integration of large-scale human-in-the-loop evaluation (Yan et al., 13 Nov 2025).

2. OutSafe-Bench for 3D Point Cloud OOD Detection

OutSafe-Bench, as presented in point cloud semantic segmentation, benchmarks out-of-distribution (OOD) detection in LiDAR-based 3D scenes, addressing an underexplored axis of safety-critical perception (Veeramacheneni et al., 2022).

2.1 Dataset Design and OOD Settings

  • Datasets:
    • In-Distribution (ID): Semantic3D (15 outdoor LiDAR scans, 8 classes: C1–C8)
    • OOD Scenarios:
      • Benchmark A: Semantic3D (ID) vs. S3DIS (indoor scenes, 13 classes remapped as OOD)
      • Benchmark B: Semantic3D (ID) vs. Semantic3D without color (removing RGB simulates sensor failure)
  • Input Representation: XYZ (normalized), with or without RGB. Points are randomly sampled into fixed-size batches (e.g., 16,384 points); no projection or aggressive downsampling is applied.
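
A minimal sketch of this input preparation, assuming a simple per-batch centering normalization (the paper's exact normalization is not specified here):

```python
import numpy as np

def make_batch(points_xyz, points_rgb=None, batch_size=16_384, rng=None):
    """Randomly sample a fixed-size batch of points, as in the benchmark setup.

    points_xyz: (N, 3) raw coordinates; centered per batch here as an
                assumed stand-in for the paper's normalization.
    points_rgb: optional (N, 3) colors in [0, 255]; omitted for Benchmark B,
                which simulates sensor failure by dropping RGB.
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(points_xyz), size=batch_size, replace=True)
    xyz = points_xyz[idx] - points_xyz[idx].mean(axis=0)  # center the batch
    if points_rgb is None:
        return xyz                                        # geometry only
    return np.concatenate([xyz, points_rgb[idx] / 255.0], axis=1)
```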

2.2 OOD Scoring and Model Architectures

  • OOD Scores: At the point level, Maximum Softmax Probability (MSP) and Predictive Entropy are computed from the averaged softmax output—averaged either across $M$ Deep Ensemble members or across $T$ Flipout forward passes for Bayesian uncertainty. No spatial or scene-level aggregation is performed for OOD scoring.
  • Model Backbones: All methods employ RandLA-Net, combining random sampling, local feature aggregation, and hierarchical encoder-decoder structure. Deep Ensembles (M=20) use independently trained instances; Flipout leverages Bayesian perturbations in the classification layers.
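
The two point-level scores can be sketched as follows; `softmax_stack` stands in for the $T$ (or $M$) stochastic softmax outputs, and higher values indicate more OOD:

```python
import numpy as np

def ood_scores(softmax_stack: np.ndarray):
    """Point-wise OOD scores from T stochastic softmax outputs.

    softmax_stack: (T, n_points, n_classes) — T is either the number of
                   ensemble members M or the number of Flipout forward passes.
    Returns (msp_score, entropy_score); for both, higher means more OOD.
    """
    p = softmax_stack.mean(axis=0)                  # averaged predictive distribution
    msp = 1.0 - p.max(axis=1)                       # negated Maximum Softmax Probability
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)  # predictive entropy
    return msp, entropy

# Point-wise AUROC (as reported) can then be computed with
# sklearn.metrics.roc_auc_score(is_ood_labels, entropy_score).
```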

2.3 Evaluation Metrics

  • Quantitative Metrics: Area Under the ROC Curve (AUROC) for point-wise OOD classification; definitions for AUPR and FPR@95%TPR are provided but only AUROC is reported.
  • Empirical Performance (for $M = T = 20$):

Benchmark      Method          MSP AUROC   Entropy AUROC
A (Geometry)   Deep Ensemble   0.8934      0.8905
A (Geometry)   Flipout         0.7733      0.7724
B (Color)      Deep Ensemble   0.7703      0.7758
B (Color)      Flipout         0.6302      0.6593

Deep Ensembles yield superior OOD separability, particularly when the semantic gap is large (outdoor vs. indoor). Color removal (Benchmark B) causes a significant drop in AUROC, confirming that RGB cues contribute substantially to OOD discrimination.

2.4 Analysis and Practical Considerations

  • Ensemble Superiority: Ensembles capture more diverse epistemic uncertainty than Flipout, whose variational posterior is typically narrower and underrepresents multimodality. Performance saturates at ≈10 ensemble members.
  • Failure Modes: Geometry-only OOD (no color) leads to confusion, notably for classes like walls/vegetation. Church edge points and OOD planar surfaces are prominent false positives in Benchmark A.
  • Deployment Implications: Per-point thresholds can be calibrated for stringent false positive rates, offering concrete trade-offs for safety-critical pipelines (Veeramacheneni et al., 2022).
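
As an illustration of the calibration mentioned in the last bullet above, one plausible recipe picks the OOD-score quantile on held-out in-distribution points that bounds the false positive rate; this quantile-based rule is an assumption, not the paper's procedure:

```python
import numpy as np

def threshold_at_fpr(id_scores: np.ndarray, target_fpr: float = 0.05) -> float:
    """Pick the OOD-score threshold whose false positive rate on
    in-distribution points does not exceed target_fpr.

    id_scores: OOD scores (higher = more OOD) on held-out ID points.
    """
    # A point is flagged OOD when score > threshold; the (1 - target_fpr)
    # quantile of ID scores bounds the ID false-positive rate at target_fpr.
    return float(np.quantile(id_scores, 1.0 - target_fpr))

# Usage: flag points whose score exceeds the calibrated threshold.
# is_ood = test_scores > threshold_at_fpr(val_id_scores, 0.05)
```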

2.5 Recommendations and Future Work

Extensions propose testing density-based OOD, adversarial disturbances, lightweight uncertainty quantification, scene-level aggregation, and more comprehensive risk metrics (AUPR, FPR@95%TPR).

3. OutSafe-Bench Physical Platform for Human-Robot Interaction

In the robotics domain, OutSafe-Bench refers to a low-cost (≈$150) modular test platform comprising a 3-wheel omnidirectional mobile base, a 2-DoF arm, an Arduino Mega 2560, and a hex array of HC-SR04 ultrasonic sensors. It is purpose-built for rapid evaluation of collision-avoidance algorithms in human-robot interaction (HRI) (Fereydooni et al., 2023).

3.1 Hardware Configuration

  • Sensor Pack: Six ultrasonic sensors, spaced at 60° intervals, provide near-360° proximity coverage (measurement range 2–400 cm, beam width ≈30°, 0.3 cm resolution).
  • Actuator Modules:
    • Omnidirectional base: 3 DC motors, 3 wheels (120° separation), L293D dual-H-bridge drivers.
    • 2-DoF arm: two MG995 servos for yaw/pitch; arm length ~30 cm.
  • Processing Unit: Arduino Mega 2560 (54 digital I/O, 16 analog in), implements control at 50 Hz.
  • Connectivity: Direct TTL-level signals. No ROS middleware.

3.2 Software Control and Algorithms

  • Control Loop: All six sensors are read each cycle; if any reports a distance below SafeArea (0.5 m), collision-avoidance routines are triggered.
  • Algorithms:
    • Algorithm 1 (Arm-First): Arm swings away from threat direction; if unresolved, base retreats.
    • Algorithm 2 (Base-First): Base retreats immediately; arm swings only if subsequent proximity persists.
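
A behavioral sketch of this loop and the two priority schemes, written in Python rather than the platform's Arduino firmware; the callback names are hypothetical:

```python
SAFE_AREA_M = 0.5   # avoidance trigger distance from the paper
LOOP_HZ = 50        # control rate reported for the Arduino Mega 2560

def control_step(read_sensors, avoid_with_arm, retreat_base, arm_first=True):
    """One control step; the real platform runs this at LOOP_HZ.

    read_sensors():         hypothetical callback returning six distances (m),
                            one per 60° sector.
    avoid_with_arm(sector): swings the arm away; returns True if resolved.
    retreat_base(sector):   drives the omnidirectional base away.
    arm_first selects Algorithm 1 (Arm-First) vs. Algorithm 2 (Base-First).
    """
    distances = read_sensors()
    threats = [i for i, d in enumerate(distances) if d < SAFE_AREA_M]
    if not threats:
        return
    sector = min(threats, key=lambda i: distances[i])  # closest threat
    if arm_first:
        if not avoid_with_arm(sector):   # Algorithm 1: arm swings first;
            retreat_base(sector)         # base retreats only if unresolved
    else:
        retreat_base(sector)             # Algorithm 2: base retreats immediately;
        if read_sensors()[sector] < SAFE_AREA_M:
            avoid_with_arm(sector)       # arm moves only if threat persists
```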

3.3 Evaluation Protocols and Metrics

  • Setup: Flat lab environment; a human approaches the robot along the cardinal directions. All signals (distance logs, servo angles, 2D paths) are recorded and analyzed.
  • Quantitative Metrics:
    • Minimum separation distance $d_{\min}$.
    • Reaction time $t_r$ (from SafeArea breach to actuator initiation).
    • Number of collisions.
  • Examples: Arm-first priority yields local avoidance in ≈80 ms; base-first, in ≈200 ms. No collisions occurred; minor path deviations (±5 cm) were recorded, attributed to actuation latency and beam occlusion.
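
These metrics can be recovered from the logged signals roughly as follows; the log layout and the contact threshold are assumptions, since the paper counts collisions directly:

```python
import numpy as np

def hri_trial_metrics(t, d, actuator_on, trigger_m=0.5, contact_m=0.02):
    """Derive the three reported metrics from one logged trial.

    t:           (n,) timestamps in seconds.
    d:           (n,) closest measured human-robot distance per step (m).
    actuator_on: (n,) bool flags, True once an avoidance command is issued.
    contact_m is a hypothetical contact threshold (2 cm).
    """
    d_min = float(d.min())                  # minimum separation distance
    breach = int(np.argmax(d < trigger_m))  # first SafeArea violation
    react = int(np.argmax(actuator_on))     # first avoidance command
    t_r = float(t[react] - t[breach])       # reaction time
    n_collisions = int(np.count_nonzero(d <= contact_m))
    return d_min, t_r, n_collisions
```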

3.4 System Characteristics and Limitations

  • Strengths:
    • Economic and modular; rapid prototyping.
    • Achieves real-time control at ≈50 Hz.
  • Limitations:
    • Ultrasonic coverage is incomplete at certain angles due to limited beam width.
    • No ROS middleware or state-machine integration.
    • Planar, line-of-sight scenarios only.

3.5 Reproducibility and Extension Prospects

Mechanical, electrical, and software specifications enable platform replication. Proposed improvements include LiDAR/depth camera integration, ROS interoperability, force-torque sensing, and open-sourcing design files (Fereydooni et al., 2023).

4. Comparative Analysis and Broader Context

Each instantiation of OutSafe-Bench targets a unique class of safety challenge—content moderation at inference (MLLMs), sensor-level robustness for OOD perception (LiDAR), and embodied physical safety (robotics). All exemplify comprehensive experimental design, provision of reproducible benchmarks, and transparent performance reporting.

  • MLLM OutSafe-Bench prioritizes cross-modal, cross-lingual, and cross-risk evaluation using automated, fairness-aware aggregation strategies (Yan et al., 13 Nov 2025).
  • Point Cloud OutSafe-Bench focuses on per-point risk estimation, ensemble-based uncertainty, and offers scenario-based realism (geometry vs sensor failure) (Veeramacheneni et al., 2022).
  • Robot OutSafe-Bench centers on rapid, low-cost physical validation of collision-avoidance strategies, exposing practical engineering constraints (Fereydooni et al., 2023).

5. Conclusions and Outlook

OutSafe-Bench, in all its forms, establishes new standards for safety evaluation in AI, perception, and robotics. Its frameworks integrate rigorous dataset design, quantitative metrics, and detailed analysis protocols. Persistent vulnerabilities—be they in MLLM outputs under realistic, multimodal risk; in LiDAR perception under domain or sensor drift; or in near-collision HRI events—underscore the necessity of multidimensional, scenario-driven safety research. Future efforts will likely extend OutSafe-Bench with further modalities (e.g., longer-form video, wider linguistic coverage, adversarial scenarios), more granular risk aggregation, and broader community reproducibility assets.
