Concerns on Bias in Large Language Models when Creating Synthetic Personae (2405.05080v1)

Published 8 May 2024 in cs.HC and cs.AI

Abstract: This position paper explores the benefits, drawbacks, and ethical considerations of incorporating synthetic personae in HCI research, particularly focusing on the customization challenges beyond the limitations of current LLMs. These perspectives are derived from the initial results of a sub-study employing vignettes to showcase the existence of bias within black-box LLMs and explore methods for manipulating them. The study aims to establish a foundation for understanding the challenges associated with these models, emphasizing the necessity of thorough testing before utilizing them to create synthetic personae for HCI research.
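
The paper itself does not include code. As a rough sketch of the kind of vignette-based probe the abstract describes, the snippet below sends one fixed vignette to a black-box chat model twice, varying only the persona name, so that differences between the paired responses can be traced to that single attribute. The vignette text, the Kelly/Joseph name pair, the model name, and the use of the OpenAI Python client are illustrative assumptions, not the authors' actual study materials.

```python
# Hypothetical sketch of a vignette-based bias probe for a black-box LLM.
# Not the authors' protocol: the vignette text, name pair, and model name
# are placeholders chosen purely for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

VIGNETTE = (
    "{name} is a 34-year-old nurse who asks their manager for a pay rise "
    "after covering extra shifts for six months. Describe how the manager "
    "is likely to respond, and why."
)

# Vary only the persona attribute; keep everything else identical so that
# differences between the paired responses can be traced to that attribute.
PERSONAE = ["Kelly", "Joseph"]


def probe(name: str) -> str:
    """Return the model's completion for the vignette instantiated with `name`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # reduce sampling noise between paired runs
        messages=[{"role": "user", "content": VIGNETTE.format(name=name)}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for name in PERSONAE:
        print(f"--- {name} ---")
        print(probe(name))
```

Holding temperature at zero keeps the paired runs as comparable as possible; in an actual study the responses would still need systematic qualitative or quantitative coding before drawing any conclusion about bias.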



Authors (1)

Helena A. Haxvig
