Towards a Non-Ideal Methodological Framework for Responsible ML (2401.11131v1)

Published 20 Jan 2024 in cs.HC

Abstract: Though ML practitioners increasingly employ various Responsible ML (RML) strategies, their methodological approach in practice is still unclear. In particular, the constraints, assumptions, and choices of practitioners with technical duties -- such as developers, engineers, and data scientists -- are often implicit, subtle, and under-scrutinized in HCI and related fields. We interviewed 22 technically oriented ML practitioners across seven domains to understand the characteristics of their methodological approaches to RML through the lens of ideal and non-ideal theorizing of fairness. We find that practitioners' methodological approaches fall along a spectrum of idealization. While they structured their approaches through ideal theorizing, such as by abstracting the RML workflow from the inquiry into the applicability of ML, they did not pay deliberate attention to, nor systematically document, their non-ideal approaches, such as diagnosing imperfect conditions. We end our paper with a discussion of a new methodological approach, inspired by elements of non-ideal theory, to structure technical practitioners' RML process and facilitate collaboration with other stakeholders.

