
Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation (2404.01030v3)

Published 1 Apr 2024 in cs.CV, cs.AI, and cs.CY

Abstract: The recent advancement of large and powerful models with Text-to-Image (T2I) generation abilities -- such as OpenAI's DALLE-3 and Google's Gemini -- enables users to generate high-quality images from textual prompts. However, it has become increasingly evident that even simple prompts could cause T2I models to exhibit conspicuous social bias in generated images. Such bias might lead to both allocational and representational harms in society, further marginalizing minority groups. Noting this problem, a large body of recent works has been dedicated to investigating different dimensions of bias in T2I systems. However, an extensive review of these studies is lacking, hindering a systematic understanding of current progress and research gaps. We present the first extensive survey on bias in T2I generative models. In this survey, we review prior studies on dimensions of bias: Gender, Skintone, and Geo-Culture. Specifically, we discuss how these works define, evaluate, and mitigate different aspects of bias. We found that: (1) while gender and skintone biases are widely studied, geo-cultural bias remains under-explored; (2) most works on gender and skintone bias investigated occupational association, while other aspects are less frequently studied; (3) almost all gender bias works overlook non-binary identities in their studies; (4) evaluation datasets and metrics are scattered, with no unified framework for measuring biases; and (5) current mitigation methods fail to resolve biases comprehensively. Based on current limitations, we point out future research directions that contribute to human-centric definitions, evaluations, and mitigation of biases. We hope to highlight the importance of studying biases in T2I systems, as well as encourage future efforts to holistically understand and tackle biases, building fair and trustworthy T2I technologies for everyone.

Survey of Bias in Text-to-Image Generation: Definition, Evaluation, and Mitigation

Introduction

The research under discussion provides a comprehensive overview of bias within Text-to-Image (T2I) generative systems, a field of study that has rapidly gained attention with the advancement of models like OpenAI's DALLE-3 and Google's Gemini. While these models promise a vast array of applications, they also raise significant concerns about bias, tied to broader societal issues of gender, skintone, and geo-cultural representation. This survey is the first to extensively collate and analyze existing studies of bias in T2I systems, shedding light on how bias is defined, evaluated, and mitigated across different dimensions.

Bias Definitions

The paper identifies three primary dimensions of bias in T2I models:

  • Gender Bias, the most extensively studied dimension, where research reveals a strong inclination toward binary gender representations and stereotypes. Areas scrutinized include gender defaults in under-specified prompts, occupational associations, and the portrayal of characteristics, interests, stereotypes, and power dynamics.
  • Skintone Bias, which addresses models' tendency to favor lighter skin tones when skintone is unspecified in the prompt. These biases extend to occupational associations and to depicted characteristics and interests.
  • Geo-Cultural Bias, which reflects the under-representation or skewed portrayal of cultures, notably magnifying Western norms and stereotypes at the cost of global diversity.

Bias Evaluation

Evaluation Datasets

Approaches to dataset compilation vary, from manually curated prompt sets to purpose-built benchmarks such as CCUB and the Holistic Evaluation of Text-to-Image Models (HEIM) benchmark. The parallel adoption of pre-existing datasets such as LAION-5B and MS-COCO further highlights the scattered nature of current evaluation frameworks.
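
To make the template-based approach concrete, the following is a minimal sketch of how such probing prompt sets are typically compiled; the templates and occupation list are illustrative placeholders, not the actual prompts of CCUB, HEIM, or any other published benchmark.

# Sketch: compile under-specified probing prompts by crossing neutral
# templates with occupation terms. Keeping the occupation as metadata
# lets downstream metrics group generated images by occupation.
from itertools import product

TEMPLATES = [
    "a photo of a {occupation}",            # no gender or skintone cue
    "a portrait of a {occupation} at work",
]
OCCUPATIONS = ["doctor", "nurse", "engineer", "housekeeper", "CEO"]

def build_prompts():
    return [
        {"prompt": t.format(occupation=o), "occupation": o}
        for t, o in product(TEMPLATES, OCCUPATIONS)
    ]

for entry in build_prompts():
    print(entry["prompt"])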

Evaluation Metrics

The paper notes the prevalence of classification-based metrics, complemented by embedding-based metrics for a more nuanced understanding of bias. While classification methods dominate the evaluation landscape, the survey also addresses concerns about the reliability of automated classifiers and the ethics of human annotation processes.
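
To ground the distinction, below is a minimal sketch of both metric families using CLIP as the underlying model; the binary label set, the L1 parity score, and the choice of CLIP checkpoint are illustrative assumptions, not a standardized metric from the survey.

# Sketch of two metric families over a set of generated images:
# (1) classification-based: zero-shot CLIP labels each image, and the
#     label distribution is compared against uniform;
# (2) embedding-based: mean image-text similarity toward each label,
#     avoiding a hard classification decision.
# The binary labels below are illustrative; the survey stresses that
# binary framings exclude non-binary identities.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

LABELS = ["a photo of a man", "a photo of a woman"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def _label_scores(path):
    image = Image.open(path).convert("RGB")
    inputs = processor(text=LABELS, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        return model(**inputs).logits_per_image.squeeze(0)  # (n_labels,)

def classification_parity_gap(image_paths):
    """L1 distance between the zero-shot label distribution and the
    uniform distribution; 0 means perfectly balanced."""
    counts = torch.zeros(len(LABELS))
    for path in image_paths:
        counts[_label_scores(path).argmax()] += 1
    dist = counts / counts.sum()
    return (dist - 1.0 / len(LABELS)).abs().sum().item()

def embedding_association(image_paths):
    """Embedding-based alternative: mean CLIP similarity toward each
    label text, without forcing a hard label per image."""
    sims = torch.stack([_label_scores(p) for p in image_paths])
    return sims.mean(dim=0)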

Bias Mitigation

Mitigation strategies are broadly classified into refining model weights, intervening at inference time, and curating training data. Despite the variety of proposed methods, ranging from fine-tuning and model-based editing to prompt engineering and guided generation, no existing approach resolves biases comprehensively. The paper calls for further research into robust, adaptive, and community-informed mitigation strategies to cultivate fairer T2I systems.
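
As a deliberately simple illustration of the inference-time family, the sketch below rewrites an under-specified prompt with explicit attribute terms before each generation; the attribute list, the prompt rewrite rule, and the use of Stable Diffusion through the diffusers pipeline are assumptions for illustration, not a method endorsed by the survey.

# Sketch of a prompt-engineering intervention: cycle explicit attribute
# terms into an under-specified prompt so the batch of generations is
# balanced by construction. The attribute list is an illustrative
# assumption; the survey argues such choices should be community-informed.
import itertools
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

ATTRIBUTES = ["female", "male", "non-binary"]

def generate_balanced(base_prompt, n_images):
    # Assumes base_prompt has the form "a photo of a <person noun>",
    # e.g. "a photo of a doctor" -> "a photo of a female doctor".
    images = []
    for attr in itertools.islice(itertools.cycle(ATTRIBUTES), n_images):
        prompt = base_prompt.replace("a photo of a",
                                     f"a photo of a {attr}")
        images.append(pipe(prompt).images[0])
    return images

images = generate_balanced("a photo of a doctor", n_images=6)

Hard-coded rewrites like this can misfire, for example by reinforcing the very categories they try to balance, which is one reason the survey calls for adaptive, community-informed mitigation.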

Future Directions

The survey emphasizes the necessity for:

  • Enhanced definitions that clarify and contextualize biases,
  • Improved evaluation methods to measure biases accurately, considering human-centric perspectives,
  • Continuous development of mitigation strategies that are effective, diverse, and adaptive to evolving societal norms.

The discussion extends to ethical considerations, highlighting the importance of transparency in defining bias and the potential misuse of mitigation strategies in unjust applications.

Conclusion

This survey articulates the pressing need for an integrated approach to understanding, evaluating, and mitigating bias in T2I generative models. By categorizing existing studies and identifying gaps, it paves the way for future research aimed at developing fair, inclusive, and trustworthy T2I technologies.

References (114)
  1. Review on the effects of age, gender, and race demographics on automatic face recognition. The Visual Computer, 34:1617–1630, 2018.
  2. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.
  3. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b.
  4. Hrs-bench: Holistic, reliable and scalable benchmark for text-to-image models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  20041–20053, 2023.
  5. How well can text-to-image generative models understand ethical natural language interventions? arXiv preprint arXiv:2210.15230, 2022.
  6. Peering through preferences: Unraveling feedback acquisition for aligning large language models. arXiv preprint arXiv:2308.15812, 2023.
  7. The problem with bias: Allocative versus representational harms in machine learning. In 9th Annual conference of the special interest group for computing, information and society, pp.  1. Philadelphia, PA, USA, 2017.
  8. Inspecting the geographical representativeness of images from text-to-image models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.  5136–5147, 2023.
  9. Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. In FAccT’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, 2023.
  10. Analyzing the effects of annotator gender across NLP tasks. In Gavin Abercrombie, Valerio Basile, Sara Tonelli, Verena Rieser, and Alexandra Uma (eds.), Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022, pp.  10–19, Marseille, France, June 2022. European Language Resources Association. URL https://aclanthology.org/2022.nlperspectives-1.2.
  11. Typology of risks of generative text-to-image models. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pp.  396–410, 2023.
  12. Language (technology) is power: A critical survey of “bias” in NLP. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp.  5454–5476, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.485. URL https://aclanthology.org/2020.acl-main.485.
  13. Stereotyping norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp.  1004–1015, 2021.
  14. How far are we from solving the 2d & 3d face alignment problem?(and a dataset of 230,000 3d facial landmarks). In Proceedings of the IEEE international conference on computer vision, pp.  1021–1030, 2017.
  15. Gender shades: Intersectional accuracy disparities in commercial gender classification. In FAT, 2018. URL https://api.semanticscholar.org/CorpusID:3298854.
  16. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.
  17. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023.
  18. Skin colour typology and suntanning pathways. International Journal of Cosmetic Science, 13, 1991. URL https://api.semanticscholar.org/CorpusID:25650931.
  19. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning, 2023.
  20. Tibet: Identifying and evaluating biases in text-to-image generative models. arXiv preprint arXiv:2312.01261, 2023.
  21. Dall-eval: Probing the reasoning skills and social biases of text-to-image generation models, 2023.
  22. Kate Crawford. The trouble with bias. Keynote at NeurIPS, 2017. URL https://www.youtube.com/watch?v=fMym_BKWQzk.
  23. Tom Davenport. Cuebric:generative ai comes to hollywood. 2023. URL https://www.forbes.com/sites/tomdavenport/2023/03/13/cuebric-generative-ai-comes-to-hollywood/?sh=19b07abb174b.
  24. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.  4690–4699, 2019.
  25. Retinaface: Single-shot multi-level face localisation in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.  5203–5212, 2020.
  26. Mitigating stereotypical biases in text to image generative systems. arXiv preprint arXiv:2310.06904, 2023.
  27. Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
  28. Federal Bureau of Prisons. BOP Statistics: Inmate Race — bop.gov. https://www.bop.gov/about/statistics/statistics_inmate_race.jsp.
  29. Towards racially unbiased skin tone estimation via scene disambiguation. In European Conference on Computer Vision, pp.  72–90. Springer, 2022.
  30. Charlie Fink. Vr film producer announces ai film. 2023. URL https://www.forbes.com/sites/charliefink/2023/03/02/vr-film-producer-announces-ai-film/?sh=553011426ab9.
  31. Diversity is not a one-way street: Pilot study on ethical interventions for racial bias in text-to-image systems. In 14th International Conference on Computational Creativity (ICCC). Waterloo, ON, Canada, 2023a.
  32. A friendly face: Do text-to-image systems rely on stereotypes when the input is under-specified? arXiv preprint arXiv:2302.07159, 2023b.
  33. Fair diffusion: Instructing text-to-image generation models on fairness, 2023.
  34. Multilingual text-to-image generation magnifies gender stereotypes and prompt engineering may not help you. arXiv e-prints, pp.  arXiv–2401, 2024.
  35. Uncurated image-text datasets: Shedding light on demographic bias. In CVPR, 2023.
  36. Alexandra Garfinkle. 90% of online content could be generated by AI by 2025, expert says. 2023. URL https://finance.yahoo.com/news/90-of-online-content-could-be-generated-by-ai-by-2025-expert-says-201023872.html.
  37. Google Responsible AI. Improving skin tone evaluation in machine learning. 2022. URL https://skintone.google/.
  38. GOP. Beat Biden — youtube.com. https://www.youtube.com/watch?v=kLMMxgtxQ1Y&t=32s, 2023.
  39. Harm amplification in text-to-image models. arXiv preprint arXiv:2402.01787, 2024.
  40. Debiasing text-to-image diffusion models, 2024.
  41. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
  42. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.
  43. Beyond the surface: A global-scale analysis of visual stereotypes in text-to-image generation. ArXiv, abs/2401.06310, 2024. URL https://api.semanticscholar.org/CorpusID:267959832.
  44. Ai art and its impact on artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’23, pp.  363–374, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400702310. doi: 10.1145/3600211.3604681. URL https://doi.org/10.1145/3600211.3604681.
  45. Fairface: Face attribute dataset for balanced race, gender, and age. arXiv preprint arXiv:1908.04913, 2019.
  46. Situating the social issues of image generation models in the model life cycle: a sociotechnical approach. arXiv preprint arXiv:2311.18345, 2023.
  47. De-stereotyping Text-to-image Models through Prompt Tuning. https://openreview.net/forum?id=yNyywJln2R, 2023.
  48. Pick-a-pic: An open dataset of user preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36, 2024.
  49. Face recognition performance: Role of demographic information. IEEE Transactions on Information Forensics and Security, 7(6):1789–1801, 2012. doi: 10.1109/TIFS.2012.2214212.
  50. A novel approach for bias mitigation of gender classification algorithms using consistency regularization. Image and Vision Computing, 137:104793, 2023. ISSN 0262-8856. doi: https://doi.org/10.1016/j.imavis.2023.104793. URL https://www.sciencedirect.com/science/article/pii/S0262885623001671.
  51. Understanding fairness of gender classification algorithms across gender-race groups. In 2020 19th IEEE international conference on machine learning and applications (ICMLA), pp.  1028–1035. IEEE, 2020.
  52. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192, 2023.
  53. Holistic evaluation of text-to-image models. Advances in Neural Information Processing Systems, 36, 2024.
  54. Fair text-to-image diffusion via fair mapping. arXiv preprint arXiv:2311.17695, 2023a.
  55. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International conference on machine learning, pp.  12888–12900. PMLR, 2022.
  56. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models, 2023b.
  57. Word-level explanations for analyzing bias in text-to-image models, 2023.
  58. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp.  740–755. Springer, 2014.
  59. Scoft: Self-contrastive fine-tuning for equitable image generation. arXiv preprint arXiv:2401.08053, 2024.
  60. Stable bias: Analyzing societal representations in diffusion models. arXiv preprint arXiv:2303.11408, 2023.
  61. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747, 2023.
  62. Multimodal composite association score: Measuring gender bias in generative multimodal models, 2023.
  63. Harvey Mannering. Analysing gender bias in text-to-image models using object detection. arXiv preprint arXiv:2307.08025, 2023.
  64. Characterizing bias in classifiers using generative models. Advances in neural information processing systems, 32, 2019.
  65. Resolving ambiguities in text-to-image generative models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp.  14367–14388, 2023.
  66. Margaret Mitchell. Ethical AI Isn’t to Blame for Google’s Gemini Debacle — time.com. https://time.com/6836153/ethical-ai-google-gemini-debacle/, 2024.
  67. Diversity and inclusion metrics in subset selection. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp.  117–123, 2020.
  68. Unmaking ai imagemaking: A methodological toolkit for critical investigation, 2023.
  69. Understanding unequal gender classification accuracy from face images, 2018.
  70. StereoSet: Measuring stereotypical bias in pretrained language models. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp.  5356–5371, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.416. URL https://aclanthology.org/2021.acl-long.416.
  71. Social biases through the text-to-image generation lens, 2023.
  72. Humans are biased. generative ai is even worse. 2023. URL https://www.bloomberg.com/graphics/2023-generative-ai-bias/.
  73. Evgeny Obedkov. How ai-assisted rpg tales of syn utilizes stable diffusion and chatgpt to create assets and dialogues. 2023. URL https://gameworldobserver.com/2023/03/06/tales-of-syn-ai-rpg-stable-diffusion-chatgpt-game.
  74. OpenAI. Dall·e 3 system card, Oct 2023. URL https://openai.com/research/dall-e-3-system-card.
  75. Editing implicit assumptions in text-to-image diffusion models. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp.  7030–7038, 2023. URL https://api.semanticscholar.org/CorpusID:257505246.
  76. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
  77. “i’m fully who i am”: Towards centering transgender and non-binary voices to measure biases in open language generation. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp.  1246–1266, 2023a.
  78. Factoring the matrix of domination: A critical review and reimagination of intersectionality in ai fairness. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’23, pp.  496–511, New York, NY, USA, 2023b. Association for Computing Machinery. ISBN 9798400702310. doi: 10.1145/3600211.3604705. URL https://doi.org/10.1145/3600211.3604705.
  79. Gaussian harmony: Attaining fairness in diffusion-based face generation models, 2023.
  80. Modeling human annotation errors to design bias-aware systems for social stream processing. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp.  374–377, 2019.
  81. Danny Postma. AI Modelling Agency — Deep Agency — deepagency.com. https://www.deepagency.com/.
  82. Ai’s regimes of representation: A community-centered study of text-to-image models in south asia. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’23, pp.  506–517, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400701924. doi: 10.1145/3593013.3594016. URL https://doi.org/10.1145/3593013.3594016.
  83. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. URL https://api.semanticscholar.org/CorpusID:231591445.
  84. Deep generative views to mitigate gender classification bias across gender-race groups. In International Conference on Pattern Recognition, pp.  551–569. Springer, 2022.
  85. High-resolution image synthesis with latent diffusion models, 2021.
  86. Dex: Deep expectation of apparent age from a single image. In Proceedings of the IEEE international conference on computer vision workshops, pp.  10–15, 2015.
  87. A multi-dimensional study on bias in vision-language models. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp.  6445–6455, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.403. URL https://aclanthology.org/2023.findings-acl.403.
  88. Nlpositionality: Characterizing design biases of datasets and models. arXiv preprint arXiv:2306.01943, 2023.
  89. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In Marine Carpuat, Marie-Catherine de Marneffe, and Ivan Vladimir Meza Ruiz (eds.), Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp.  5884–5906, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.431. URL https://aclanthology.org/2022.naacl-main.431.
  90. A unified framework and dataset for assessing gender bias in vision-language models, 2024.
  91. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.  815–823, 2015.
  92. The bias amplification paradox in text-to-image generation, 2023.
  93. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Iryna Gurevych and Yusuke Miyao (eds.), Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp.  2556–2565, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1238. URL https://aclanthology.org/P18-1238.
  94. Finetuning text-to-image diffusion models for fairness. arXiv preprint arXiv:2311.07604, 2023.
  95. Jeanette Silveira. Generic masculine words and thinking. Women’s Studies International Quarterly, 3(2):165–178, 1980. ISSN 0148-0685. doi: https://doi.org/10.1016/S0148-0685(80)92113-2. URL https://www.sciencedirect.com/science/article/pii/S0148068580921132. The voices and words of women and men.
  96. Evaluating the social impact of generative ai systems in systems and society, 2023.
  97. Exploiting cultural biases via homoglyphs in text-to-image synthesis. Journal of Artificial Intelligence Research, 78:1017–1068, 2023.
  98. Dreamsync: Aligning text-to-image generation with image understanding feedback. arXiv preprint arXiv:2311.17946, 2023.
  99. Deepface: Closing the gap to human-level performance in face verification. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp.  1701–1708, 2014. doi: 10.1109/CVPR.2014.220.
  100. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
  101. Stereotypes and smut: The (mis)representation of non-cisgender identities by text-to-image models. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp.  7919–7942, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.502. URL https://aclanthology.org/2023.findings-acl.502.
  102. Quantifying bias in text-to-image generative models, 2023.
  103. Diffusion model alignment using direct preference optimization. arXiv preprint arXiv:2311.12908, 2023.
  104. The male ceo and the female assistant: Probing gender biases in text-to-image models through paired stereotype test, 2024.
  105. T2IAT: Measuring valence and stereotypical biases in text-to-image generation. In Findings of the Association for Computational Linguistics: ACL 2023, pp.  2560–2574, Toronto, Canada, July 2023a. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.160. URL https://aclanthology.org/2023.findings-acl.160.
  106. DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp.  893–911, Toronto, Canada, July 2023b. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.51. URL https://aclanthology.org/2023.acl-long.51.
  107. Gender classification and bias mitigation in facial images. In Proceedings of the 12th ACM Conference on Web Science, pp.  106–114, 2020.
  108. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023.
  109. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36, 2024.
  110. Unified detoxifying and debiasing in language generation via inference-time adaptive optimization. arXiv preprint arXiv:2210.04492, 2022.
  111. Diverse diffusion: Enhancing image diversity in text-to-image generation, 2023.
  112. ITI-GEN: Inclusive text-to-image generation. In ICCV, 2023a.
  113. The unreasonable effectiveness of deep features as a perceptual metric. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.  586–595, Los Alamitos, CA, USA, jun 2018. IEEE Computer Society. doi: 10.1109/CVPR.2018.00068. URL https://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00068.
  114. Auditing gender presentation differences in text-to-image models, 2023b.
Authors (9)
  1. Yixin Wan
  2. Arjun Subramonian
  3. Anaelia Ovalle
  4. Zongyu Lin
  5. Ashima Suvarna
  6. Christina Chance
  7. Hritik Bansal
  8. Rebecca Pattichis
  9. Kai-Wei Chang
Citations (13)