
Towards Geographic Inclusion in the Evaluation of Text-to-Image Models (2405.04457v1)

Published 7 May 2024 in cs.CV, cs.CY, and cs.HC

Abstract: Rapid progress in text-to-image generative models, coupled with their deployment for visual content creation, has magnified the importance of thoroughly evaluating their performance and identifying potential biases. In pursuit of models that generate images that are realistic, diverse, visually appealing, and consistent with the given prompt, researchers and practitioners often turn to automated metrics to facilitate scalable and cost-effective performance profiling. However, commonly used metrics often fail to account for the full diversity of human preference, and even in-depth human evaluations face challenges with subjectivity, especially as interpretations of evaluation criteria vary across regions and cultures. In this work, we conduct a large, cross-cultural study of how much annotators in Africa, Europe, and Southeast Asia vary in their perception of geographic representation, visual appeal, and consistency in real and generated images from state-of-the-art public APIs. We collect over 65,000 image annotations and 20 survey responses. Contrasting human annotations with common automated metrics, we find that human preferences vary notably across geographic location and that current metrics do not fully account for this diversity. For example, annotators in different locations often disagree on whether exaggerated, stereotypical depictions of a region are considered geographically representative. In addition, the utility of automatic evaluations depends on assumptions about their setup, such as the alignment of feature extractors with human perception of object similarity, or the definition of "appeal" captured in reference datasets used to ground evaluations. We recommend steps for improved automatic and human evaluations.
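The automated metrics the abstract contrasts with human judgment include prompt-consistency scores computed with a vision-language feature extractor. Below is a minimal CLIPScore-style sketch, not the paper's exact evaluation setup: the Hugging Face checkpoint name and the example prompt are illustrative assumptions.

```python
# Minimal sketch of a CLIPScore-style prompt-consistency metric.
# Checkpoint and prompt are illustrative assumptions, not the paper's setup.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clipscore(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings,
    rescaled by 2.5 and floored at 0, following the CLIPScore convention."""
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return max(0.0, 2.5 * float((img * txt).sum()))

# Example usage (hypothetical file and prompt):
# score = clipscore(Image.open("generated.png"), "a street market in Accra")
```

On the human-evaluation side, cross-regional disagreement between annotators can be quantified with a chance-corrected agreement statistic such as Fleiss' kappa. A minimal sketch follows; the toy ratings are invented purely for illustration.

```python
# Minimal sketch of inter-annotator agreement via Fleiss' kappa.
# The ratings matrix is a toy example, not data from the paper.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = images, columns = annotators; categorical labels, e.g.
# 0 = "not geographically representative", 1 = "representative".
ratings = np.array([
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [1, 1, 1],
])
table, _ = aggregate_raters(ratings)  # counts per (image, category)
print(f"Fleiss' kappa: {fleiss_kappa(table):.3f}")
```

Comparing such agreement statistics within and across annotator regions is one way to surface the location-dependent disagreement the abstract describes.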

Authors (6)
  1. Melissa Hall (24 papers)
  2. Samuel J. Bell (12 papers)
  3. Candace Ross (25 papers)
  4. Adina Williams (72 papers)
  5. Michal Drozdzal (45 papers)
  6. Adriana Romero Soriano (6 papers)