More Distinctively Black and Feminine Faces Lead to Increased Stereotyping in Vision-Language Models (2407.06194v2)

Published 22 May 2024 in cs.CV, cs.AI, and cs.CL

Abstract: Vision-Language Models (VLMs), exemplified by GPT-4V, adeptly integrate text and vision modalities. This integration enhances LLMs' ability to mimic human perception, allowing them to process image inputs. Despite VLMs' advanced capabilities, however, there is a concern that VLMs inherit biases from both modalities in ways that make those biases more pervasive and difficult to mitigate. Our study explores how VLMs perpetuate homogeneity bias and trait associations with regard to race and gender. When prompted to write stories based on images of human faces, GPT-4V describes subordinate racial and gender groups with greater homogeneity than dominant groups and relies on distinct, yet generally positive, stereotypes. Importantly, VLM stereotyping is driven by visual cues rather than group membership alone, such that faces rated as more prototypically Black and feminine are subject to greater stereotyping. These findings suggest that VLMs may associate subtle visual cues related to racial and gender groups with stereotypes in ways that could be challenging to mitigate. We explore the underlying reasons behind this behavior, discuss its implications, and emphasize the importance of addressing these biases as VLMs come to mirror human perception.

Authors (3)
  1. Messi H. J. Lee
  2. Jacob M. Montgomery
  3. Calvin K. Lai