The Narrow Depth and Breadth of Corporate Responsible AI Research (2405.12193v1)
Abstract: The transformative potential of AI presents remarkable opportunities, but also significant risks, underscoring the importance of responsible AI development and deployment. Despite a growing emphasis on this area, there is limited understanding of industry's engagement in responsible AI research, i.e., the critical examination of AI's ethical, social, and legal dimensions. To address this gap, we analyzed over 6 million peer-reviewed articles and 32 million patent citations using multiple methods across five distinct datasets to quantify industry's engagement. Our findings reveal that the majority of AI firms show limited or no engagement in this critical subfield of AI. We show a stark disparity between industry's dominant presence in conventional AI research and its limited engagement in responsible AI. Leading AI firms exhibit significantly lower output in responsible AI research compared to their conventional AI research and the contributions of leading academic institutions. Our linguistic analysis documents a narrower scope of responsible AI research within industry, with a lack of diversity in key topics addressed. Our large-scale patent citation analysis uncovers a pronounced disconnect between responsible AI research and the commercialization of AI technologies, suggesting that industry patents rarely build upon insights generated by the responsible AI literature. This gap highlights the potential for AI development to diverge from a socially optimal path, risking unintended consequences due to insufficient consideration of ethical and societal implications. Our results highlight the urgent need for industry to publicly engage in responsible AI research to absorb academic knowledge, cultivate public trust, and proactively mitigate AI-induced societal harms.
Authors: Nur Ahmed, Amit Das, Kirsten Martin, Kawshik Banerjee