
Beyond Predictive Algorithms in Child Welfare (2403.05573v1)

Published 26 Feb 2024 in cs.CY, cs.HC, and cs.LG

Abstract: Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions. Researchers have highlighted that RAs can contain biased signals which flatten CW case complexities and that the algorithms may benefit from incorporating contextually rich case narratives, i.e., casenotes written by caseworkers. To investigate this hypothesized improvement, we quantitatively deconstructed two commonly used RAs from a United States CW agency. We trained classifier models to compare the predictive validity of RAs with and without casenote narratives and applied computational text analysis on casenotes to highlight topics uncovered in the casenotes. Our study finds that common risk metrics used to assess families and build CWS predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s). We also find that although casenotes cannot predict discharge outcomes, they contain contextual case signals. Given the lack of predictive validity of RA scores and casenotes, we propose moving beyond quantitative risk assessments for public sector algorithms and towards using contextual sources of information such as narratives to study public sociotechnical systems.

Beyond Predictive Algorithms in Child Welfare: A Quantitative Critique

The paper "Beyond Predictive Algorithms in Child Welfare" presents a critical examination of the role of risk assessments in algorithmic decision-making within the child welfare system (CWS), highlighting the limitations of current predictive models. The authors focus on the United States child welfare sector's reliance on predictive decision-making algorithms, which utilize risk assessments (RAs) as inputs for predictive risk models (PRMs). These PRMs are formulated to identify high-risk cases of child maltreatment, theoretically allowing caseworkers to make more objective and efficient decisions. However, the paper argues for a fundamental reevaluation of these models, emphasizing the value of narrative data in understanding the broader context of child welfare cases.

Key Findings and Methodology

The research investigates whether integrating case narrative data, specifically caseworkers' casenotes, into PRMs can improve the models' predictive validity. The paper quantitatively deconstructs two widely used RAs, the Adult-Adolescent Parenting Inventory (AAPI) and the North Carolina Family Assessment Scale (NCFAS), from a United States CWS agency, comparing the predictive performance of models trained on RA data alone with models that also incorporate contextual narratives. The authors use support vector machine (SVM) and random forest classifiers, and apply topic modeling to extract thematic insights from the narrative data.
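
To make the comparison concrete, the following is a minimal sketch of how one could train SVM and random forest classifiers on risk-assessment-style features and inspect per-class performance. The features, data, labels, and split below are invented stand-ins for illustration; they are not the paper's actual pipeline, hyperparameters, or agency data.

```python
# Illustrative sketch only: synthetic stand-ins for RA-based classifiers.
# Feature values and labels are generated at random for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
# Hypothetical risk-assessment features (e.g., AAPI/NCFAS-style subscale totals).
X = rng.normal(size=(n, 6))
# Imbalanced outcome: 1 = reunification ('R'), 0 = non-reunification ('NR').
y = (rng.random(n) < 0.75).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(class_weight="balanced")),
    "Random forest": RandomForestClassifier(class_weight="balanced", random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    # Per-class recall makes a high false-positive rate on 'NR' cases visible.
    print(classification_report(y_te, model.predict(X_te), target_names=["NR", "R"]))
```

Reporting per-class metrics rather than overall accuracy matters here because non-reunification cases are the minority class, so a model can look accurate while missing them almost entirely.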

Significant findings from the paper include:

  1. Inadequacy of Risk Assessments: The study finds that the RAs used to build PRMs are ineffective at predicting non-reunification ('NR') outcomes for children, with models showing high false positive rates. For instance, classifiers built on RA data alone failed to identify 'NR' cases, frequently misclassifying them as reunification ('R') outcomes.
  2. Value of Narrative Data: Models that incorporated casenotes demonstrated better specificity in identifying 'NR' outcomes compared to RA-only models. Although these models still exhibited limitations, the narrative data provided valuable contextual signals missing from RA scores.
  3. Contextual Insights from Casenotes: Through computational text analysis, the paper shows that case narratives encapsulate complex, multi-faceted interactions not captured in RA data. These narratives reflect the discretionary work and contextual factors affecting case outcomes, underscoring the need to move from purely quantitative models toward hybrid approaches that account for qualitative detail; a minimal topic-modeling sketch follows this list.
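
As a rough illustration of the text-analysis step, the sketch below runs latent Dirichlet allocation (LDA) topic modeling over casenote-like text with scikit-learn and prints the top words per topic. The example documents, vocabulary settings, and topic count are assumptions made for illustration; they do not reflect the agency's casenotes or the authors' exact configuration.

```python
# Minimal LDA topic-modeling sketch; the documents below are invented examples,
# not actual casenotes, and the hyperparameters are illustrative only.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "visited family home discussed housing and employment support",
    "supervised visitation went well child engaged with parent",
    "missed appointment transportation barrier rescheduled parenting class",
    "court hearing scheduled caseworker submitted progress report",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```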

Implications and Future Directions

The paper critiques the foundational constructs of PRMs in the child welfare sector, suggesting that reliance on RA data for predictive purposes is fundamentally flawed. This critique extends beyond child welfare, posing broader questions about the use of predictive algorithms in public sector decision-making. Highlighting the limitations of current algorithmic approaches, the research advocates for the integration of case narratives to capture the rich, contextual fabric of public sector systems.

The findings support a movement towards contextually aware, human-centered approaches in the design of decision-making algorithms. This shift could mitigate biases inherent in RA data, such as those related to poverty and race, and more accurately reflect the lived realities of those involved in the CWS. The use of narrative data may also provide a more dynamic assessment of risk, accounting for the socio-technical complexities involved in family preservation decisions.

Future developments could focus on refining methods for integrating narrative data into PRMs, exploring computational techniques that leverage the nuanced insights casenotes provide while preserving the interpretability and fairness of predictive models. Further work is also needed to understand how algorithmic systems can be designed to support, rather than supplant, the critical judgment of human caseworkers in the child welfare domain.
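
One plausible way to operationalize such an integration is to fuse structured RA scores with text features derived from casenotes in a single model, for example via a ColumnTransformer. The sketch below is a hypothetical design with invented column names, data, and model choice; it is not the paper's proposal.

```python
# Hypothetical sketch of fusing structured RA scores with casenote text features.
# Column names, data, and model choice are assumptions, not the authors' method.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

df = pd.DataFrame({
    "ra_score": [12, 30, 25, 8],  # illustrative RA totals
    "casenote": [
        "parent completed parenting class and kept visits",
        "repeated missed visits housing instability reported",
        "relative placement explored concurrent planning started",
        "services engaged family support network in place",
    ],
    "outcome": [1, 0, 0, 1],  # 1 = reunified, 0 = not reunified
})

features = ColumnTransformer([
    ("ra", "passthrough", ["ra_score"]),       # keep structured scores as-is
    ("text", TfidfVectorizer(), "casenote"),   # turn casenotes into TF-IDF features
])
model = Pipeline([
    ("features", features),
    ("clf", RandomForestClassifier(random_state=0)),
])
model.fit(df[["ra_score", "casenote"]], df["outcome"])
print(model.predict(df[["ra_score", "casenote"]]))
```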

Conclusion

Overall, this paper provides a compelling argument for reevaluating the use of predictive algorithms in the child welfare system. By critically examining existing models and proposing a more holistic integration of narrative data, the researchers contribute to an evolving discourse on the intersection of algorithmic governance and human-centered decision-making in public sector systems.

Authors (4)
  1. Erina Seh-Young Moon
  2. Devansh Saxena
  3. Tegan Maharaj
  4. Shion Guha