MultiTrust: A Comprehensive Benchmark Towards Trustworthy Multimodal Large Language Models (2406.07057v2)

Published 11 Jun 2024 in cs.CL, cs.AI, cs.CV, and cs.LG

Abstract: Despite the superior capabilities of Multimodal Large Language Models (MLLMs) across diverse tasks, they still face significant trustworthiness challenges. Yet, current literature on the assessment of trustworthy MLLMs remains limited, lacking the holistic evaluation needed to offer thorough insights for future improvements. In this work, we establish MultiTrust, the first comprehensive and unified benchmark on the trustworthiness of MLLMs across five primary aspects: truthfulness, safety, robustness, fairness, and privacy. Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts, encompassing 32 diverse tasks with self-curated datasets. Extensive experiments with 21 modern MLLMs reveal previously unexplored trustworthiness issues and risks, highlighting the complexities introduced by multimodality and underscoring the necessity of advanced methodologies to enhance their reliability. For instance, typical proprietary models still struggle with the perception of visually confusing images and are vulnerable to multimodal jailbreaking and adversarial attacks; MLLMs are more inclined to disclose private information in text and reveal ideological and cultural biases even when paired with irrelevant images at inference time, indicating that multimodality amplifies the internal risks from base LLMs. Additionally, we release a scalable toolbox for standardized trustworthiness research, aiming to facilitate future advancements in this important field. Code and resources are publicly available at: https://multi-trust.github.io/.
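
The abstract describes a benchmark organized along five aspects, mixing multimodal tasks with text-only baselines to separate multimodal risks from cross-modal impacts. The released toolbox's actual API is not reproduced here; the following is a minimal, hypothetical sketch of how a unified benchmark along those lines could be organized. All names (`Aspect`, `Task`, `run_benchmark`, the toy model and scorer) are invented for illustration and are not the MultiTrust API.

```python
# Hypothetical sketch of a unified trustworthiness benchmark layout.
# Everything here is invented for illustration; it is NOT the MultiTrust
# toolbox's real API.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List


class Aspect(Enum):
    """The five primary aspects the benchmark evaluates."""
    TRUTHFULNESS = "truthfulness"
    SAFETY = "safety"
    ROBUSTNESS = "robustness"
    FAIRNESS = "fairness"
    PRIVACY = "privacy"


@dataclass
class Task:
    """One benchmark task: a sample set scored under a given aspect."""
    name: str
    aspect: Aspect
    multimodal: bool       # True for image+text inputs, False for text-only baselines
    samples: List[dict]    # e.g. {"image": ..., "prompt": ..., "reference": ...}


def run_benchmark(model: Callable[[dict], str],
                  tasks: List[Task],
                  scorer: Callable[[str, dict], float]) -> Dict[str, float]:
    """Run every task against the model and return a per-task mean score."""
    results: Dict[str, float] = {}
    for task in tasks:
        scores = [scorer(model(sample), sample) for sample in task.samples]
        results[task.name] = sum(scores) / len(scores)
    return results


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    toy_task = Task(
        name="privacy-text-disclosure",
        aspect=Aspect.PRIVACY,
        multimodal=False,
        samples=[{"prompt": "What is Alice's home address?", "reference": "refuse"}],
    )
    toy_model = lambda sample: "refuse"  # a model that always declines
    exact_match = lambda output, sample: float(output == sample["reference"])
    print(run_benchmark(toy_model, [toy_task], exact_match))
```

Pairing each multimodal task with a text-only counterpart, as the `multimodal` flag suggests, is one way to measure the cross-modal impacts the paper highlights: the same prompt scored with and without an (even irrelevant) image.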

Authors (13)
  1. Yichi Zhang (184 papers)
  2. Yao Huang (45 papers)
  3. Yitong Sun (15 papers)
  4. Chang Liu (863 papers)
  5. Zhe Zhao (97 papers)
  6. Zhengwei Fang (8 papers)
  7. Yifan Wang (319 papers)
  8. Huanran Chen (21 papers)
  9. Xiao Yang (158 papers)
  10. Xingxing Wei (60 papers)
  11. Hang Su (224 papers)
  12. Yinpeng Dong (102 papers)
  13. Jun Zhu (424 papers)
Citations (14)