Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach (2403.17873v1)

Published 26 Mar 2024 in cs.AI

Abstract: Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in LLMs, particularly in sensitive areas like mental health. Indeed, LLMs, which are remarkably capable of simulating roles and personas, may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors, cases of epistemic injustice, and unwarranted trust. To address these issues, we propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by their designers and users. This addition aims to bridge the gap between LLM capabilities and user perceptions, promoting the ethically responsible development and use of LLM-based technology.
