Towards Geographic Inclusion in the Evaluation of Text-to-Image Models (2405.04457v1)
Abstract: Rapid progress in text-to-image generative models, coupled with their deployment for visual content creation, has magnified the importance of thoroughly evaluating their performance and identifying potential biases. In pursuit of models that generate images that are realistic, diverse, visually appealing, and consistent with the given prompt, researchers and practitioners often turn to automated metrics to facilitate scalable and cost-effective performance profiling. However, commonly used metrics often fail to account for the full diversity of human preferences, and even in-depth human evaluations face challenges with subjectivity, especially as interpretations of evaluation criteria vary across regions and cultures. In this work, we conduct a large, cross-cultural study to examine how much annotators in Africa, Europe, and Southeast Asia vary in their perception of geographic representation, visual appeal, and consistency in real and generated images from state-of-the-art public APIs. We collect over 65,000 image annotations and 20 survey responses. We contrast human annotations with common automated metrics, finding that human preferences vary notably across geographic location and that current metrics do not fully account for this diversity. For example, annotators in different locations often disagree on whether exaggerated, stereotypical depictions of a region are considered geographically representative. In addition, the utility of automatic evaluations depends on assumptions about their set-up, such as the alignment of feature extractors with human perception of object similarity or the definition of "appeal" captured in reference datasets used to ground evaluations. We recommend steps for improved automatic and human evaluations.
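The automated prompt-consistency metrics that studies like this contrast with human judgments are typically CLIP-based image-text similarity scores. Below is a minimal sketch in the spirit of CLIPScore, assuming the Hugging Face `transformers` library and the `openai/clip-vit-base-patch32` checkpoint; it is an illustrative stand-in, not the paper's exact evaluation pipeline.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint (assumed here; any CLIP variant can serve as the feature extractor).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def clip_consistency(image: Image.Image, prompt: str) -> float:
    """CLIPScore-style prompt-image consistency: rescaled cosine similarity of CLIP embeddings."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Normalize the projected image and text embeddings before taking the cosine similarity.
    image_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    cosine = (image_emb * text_emb).sum(dim=-1).item()
    # CLIPScore rescaling: clamp negatives to zero and multiply by w = 2.5.
    return 2.5 * max(cosine, 0.0)

# Example usage: score a generated image against its prompt.
# score = clip_consistency(Image.open("generated.png"), "a market stall in Lagos")
```

Note that such a score inherits the biases of its feature extractor: if CLIP's notion of similarity diverges from how annotators in a given region perceive objects or scenes, the metric and human judgments can disagree, which is exactly the kind of gap this study measures.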
Authors: Melissa Hall, Samuel J. Bell, Candace Ross, Adina Williams, Michal Drozdzal, Adriana Romero Soriano