
HCC Is All You Need: Alignment-The Sensible Kind Anyway-Is Just Human-Centered Computing (2405.03699v1)

Published 30 Apr 2024 in cs.HC and cs.AI

Abstract: This article argues that AI Alignment is a type of Human-Centered Computing.

Summary

  • The paper reframes AI alignment as a form of Human-Centered Computing, arguing that alignment work should center users and ethics.
  • It advocates participatory design and iterative development to address evolving human needs, rather than relying on static, one-shot solutions.
  • It encourages interdisciplinary collaboration and continuous revision of alignment strategies as societal values change.

Rethinking AI Alignment as Human-Centered Computing

Understanding AI Alignment through the Lens of Human-Centered Computing

Human-Centered Computing (HCC) has been part of the academic landscape for over four decades, tackling complex interactions between humans and technology. The paper under discussion bridges AI alignment, the problem of ensuring AI systems adhere to human intentions and values, with established practices and insights from HCC.

AI alignment has been variously interpreted, spanning everything from goal specification in Artificial General Intelligence to concerns about ethical and societal impacts. The integration with HCC is posited not as a novelty but as a reframing: it lets the field draw on a rich existing body of research rather than reinventing foundational concepts under new terminology.

Key Challenges in AI Alignment

The process of aligning AI with human needs and values is multi-faceted, involving various technical and ethical challenges. Here are a few themes the paper touches upon:

  • Identifying Stakeholder Needs: Who benefits from AI, and what do they require from these systems? This question underpins much of the alignment debate and mirrors long-standing HCC concerns about user-centric design (a toy sketch of recording and checking such needs follows this list).
  • Dynamic Nature of Human Desires: What people want from technology can change, making static solutions inadequate. AI systems need to adapt to evolving human intentions, a problem well-documented within HCC frameworks.
  • Participatory Design: Incorporating diverse user inputs in AI development reflects established HCC methodologies, advocating for systems that are designed with, rather than for, users.
  • Evaluating System Impact: Understanding the broader impact of AI on individuals and society, including unanticipated negative consequences, aligns closely with similar assessments in technology design explored within HCC.
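
To make the stakeholder-needs point concrete, here is a minimal Python sketch of how elicited needs might be recorded and checked against a system's output. Every name and criterion below is hypothetical; the paper itself proposes no implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StakeholderNeed:
    """One elicited requirement, e.g. from an interview or co-design session."""
    stakeholder: str                # who voiced the need
    description: str                # the need in their own words
    check: Callable[[str], bool]    # a rough proxy test on system output

def evaluate_against_needs(output: str, needs: list[StakeholderNeed]) -> dict[str, bool]:
    """Report which elicited needs a given system output satisfies."""
    return {need.description: need.check(output) for need in needs}

# Hypothetical needs gathered through participatory sessions.
needs = [
    StakeholderNeed("community member", "responses avoid technical jargon",
                    check=lambda out: "stochastic" not in out.lower()),
    StakeholderNeed("moderator", "responses stay under 500 characters",
                    check=lambda out: len(out) <= 500),
]

print(evaluate_against_needs("A short, plain-language answer.", needs))
# {'responses avoid technical jargon': True, 'responses stay under 500 characters': True}
```

The point is not the code but the workflow it encodes: needs are elicited from people first, and evaluation criteria are derived from them rather than assumed by the designer.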

Practical and Theoretical Implications

Merging AI alignment with HCC not only offers a consolidated theoretical framework but also practical methodologies that have been honed over years. AI researchers can draw upon established HCC tools like participatory design, iterative development, and continuous user feedback to refine AI systems. This viewpoint advocates for leveraging existing knowledge rather than developing new theories from scratch, potentially accelerating the development of more humane and responsive AI technologies.
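
As one illustration of what iterative, feedback-driven refinement could look like, the following toy Python loop deploys a system, gathers user ratings, and revises the system's behavior each round. All classes and update rules here are invented stand-ins, not anything from the paper.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    score: float   # e.g. a 1-5 usefulness rating from a user session

class ToySystem:
    """Stand-in for an AI system whose behavior can be revised."""
    def __init__(self, verbosity: float):
        self.verbosity = verbosity

    def respond(self, task: str) -> str:
        # Longer answers when verbosity is high.
        return (task + " ") * max(1, round(self.verbosity))

    def revise(self, feedback: list[Feedback]) -> "ToySystem":
        mean = sum(f.score for f in feedback) / len(feedback)
        # Crude update rule: talk less when ratings are low.
        return ToySystem(self.verbosity * (0.8 if mean < 3 else 1.0))

def user_rating(output: str) -> Feedback:
    # Stand-in for a real participatory evaluation session.
    return Feedback(score=5.0 if len(output) < 40 else 2.0)

system = ToySystem(verbosity=5.0)
for round_num in range(3):
    outputs = [system.respond("summarize this") for _ in range(4)]
    ratings = [user_rating(o) for o in outputs]
    system = system.revise(ratings)
    mean = sum(r.score for r in ratings) / len(ratings)
    print(f"round {round_num}: mean rating {mean:.1f}, verbosity {system.verbosity:.1f}")
```

The design choice worth noting is that evaluation and revision happen continuously, rather than alignment being treated as a property fixed once at training time.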

Contributions from Broader Disciplines: The discussion invites input from fields such as Science and Technology Studies (STS), Communication, and Ethics, enriching AI alignment debates with diverse perspectives and methodological rigor.

Future Directions in AI Development

Looking ahead, framing AI alignment within the HCC domain suggests several future research avenues:

  • Integration of Interdisciplinary Methods: Incorporating methods from social sciences and humanities could provide new insights into user-centric AI development.
  • Focus on Ethical Design: Emphasizing the ethical dimensions of HCC can guide AI designers towards more equitable and just technologies.
  • Continuous Revision of Alignment Strategies: As societal values and norms evolve, so too must the strategies for aligning AI with human intentions. This dynamic approach can benefit from HCC's focus on iterative and inclusive design processes.

By considering AI alignment as a branch of Human-Centered Computing, we can utilize a robust framework that has been continuously refined through decades of research. This approach might not only simplify some of the complexities associated with AI but also ensure that the technologies we develop genuinely reflect and respect human values and needs.

