A University Framework for the Responsible Use of Generative AI in Research (2404.19244v1)
Abstract: Generative Artificial Intelligence (generative AI) poses both opportunities and risks for the integrity of research. Universities must guide researchers in using generative AI responsibly and in navigating a complex regulatory landscape subject to rapid change. Drawing on the experiences of two Australian universities, we propose a framework to help institutions promote and facilitate the responsible use of generative AI. We provide guidance for distilling the diverse regulatory environment into a principles-based position statement, and we explain how such a statement can then serve as a foundation for initiatives in training, communications, infrastructure, and process change. Despite the growing body of literature on AI's impact on academic integrity for undergraduate students, comparatively little attention has been paid to the impacts of generative AI on research integrity, or to the vital role of institutions in helping to address those challenges. This paper underscores the urgency for research institutions to take action in this area and offers a practical, adaptable framework for doing so.
Authors: Shannon Smith, Melissa Tate, Keri Freeman, Anne Walsh, Brian Ballsun-Stanton, Mark Hooper, Murray Lane