
Untangling Critical Interaction with AI in Students Written Assessment (2404.06955v1)

Published 10 Apr 2024 in cs.HC and cs.AI

Abstract: AI has become a ubiquitous part of society, but a key challenge exists in ensuring that humans are equipped with the required critical thinking and AI literacy skills to interact with machines effectively by understanding their capabilities and limitations. These skills are particularly important for learners to develop in the age of generative AI where AI tools can demonstrate complex knowledge and ability previously thought to be uniquely human. To activate effective human-AI partnerships in writing, this paper provides a first step toward conceptualizing the notion of critical learner interaction with AI. Using both theoretical models and empirical data, our preliminary findings suggest a general lack of Deep interaction with AI during the writing process. We believe that the outcomes can lead to better task and tool design in the future for learners to develop deep, critical thinking when interacting with AI.

Authors (5)
  1. Antonette Shibani (3 papers)
  2. Simon Knight (4 papers)
  3. Kirsty Kitto (11 papers)
  4. Ajanie Karunanayake (1 paper)
  5. Simon Buckingham Shum (6 papers)
Citations (3)

Summary

Critical Interaction with AI in Students' Written Assessment

The paper "Untangling Critical Interaction with AI in Students' Written Assessment" examines the essential challenge posed by integrating AI into educational practices, specifically focusing on developing critical interaction skills among students. The authors highlight the necessity for learners, especially in the age of generative AI, to be equipped with critical thinking and AI literacy skills to interact effectively with AI technologies in educational assessments.

Conceptual Framework and Methodology

The paper introduces a novel Critical Interaction with AI for Writing (CIAW) framework, developed by synthesizing three established theoretical models. These models include the Cognitive Process Model of Writing, the IPS-I model for information problem-solving using the Internet, and the Student Approaches to Learning framework. The CIAW framework categorizes student interactions with AI into five dimensions: Critical Interaction for Planning and Ideation, Critical Interaction for Information Seeking and Evaluation, Critical Interaction for Writing and Presentation, Personal Reflection on AI-assisted Learning, and Conversational Engagement. Each dimension is further classified into deep, shallow, or absent levels of engagement.
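As a rough illustration only (the paper does not publish code), the CIAW coding scheme described above could be represented as a small annotation schema. The Python sketch below is an assumed encoding with illustrative names, not material from the paper:

```python
from enum import Enum
from dataclasses import dataclass

# Hypothetical encoding of the CIAW coding scheme; names are illustrative.
class Dimension(Enum):
    PLANNING_IDEATION = "Critical Interaction for Planning and Ideation"
    INFO_SEEKING_EVALUATION = "Critical Interaction for Information Seeking and Evaluation"
    WRITING_PRESENTATION = "Critical Interaction for Writing and Presentation"
    PERSONAL_REFLECTION = "Personal Reflection on AI-assisted Learning"
    CONVERSATIONAL_ENGAGEMENT = "Conversational Engagement"

class Level(Enum):
    DEEP = "deep"
    SHALLOW = "shallow"
    ABSENT = "absent"

@dataclass
class Annotation:
    """One coded judgement for one student assignment."""
    student_id: str
    dimension: Dimension
    level: Level
```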

The paper employs a qualitative content analysis of 49 graduate student assignments from a data science course to evaluate the framework. Students were encouraged to use ChatGPT as a writing support tool, and the analysis focused on their self-reflections and ChatGPT interactions.

Findings

The analysis reveals a predominant trend of shallow engagement with AI tools across most dimensions. Notably, 83.7% of interactions in Planning and Ideation were shallow, while 89.8% of interactions in Information Seeking and Evaluation fell into the same category. Writing and Presentation showed shallow interaction in 18.4% of cases, with students tending to limit their use of AI to basic tasks such as rephrasing and formatting.

However, a significant portion of students demonstrated deep engagement in Personal Reflection (49%), suggesting an awareness of the limitations and ethical considerations of using AI tools. This insight points to students' potential to utilize AI responsibly, provided they are guided to think critically about their writing processes.
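To make the reported distributions concrete, a tally over annotations coded with a schema like the one sketched earlier might look as follows. This is a hypothetical illustration of how per-dimension percentages such as those above could be computed, not the authors' analysis pipeline:

```python
from collections import Counter, defaultdict

def level_distribution(annotations):
    """For each CIAW dimension, compute the percentage of annotations
    at each engagement level (deep / shallow / absent).

    `annotations` is an iterable of Annotation objects as sketched above;
    the output corresponds to figures such as '83.7% shallow' per dimension.
    """
    counts = defaultdict(Counter)
    for a in annotations:
        counts[a.dimension][a.level] += 1
    return {
        dim: {level: 100 * n / sum(c.values()) for level, n in c.items()}
        for dim, c in counts.items()
    }
```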

Implications

The paper has several implications for educational practice and future research. The shallow engagement observed in most dimensions points to an urgent need for rethinking educational curricula and assessment designs. Educators must emphasize AI literacy and critical interaction skills, empowering students to harness the full potential of AI technologies without compromising their learning experience.

Moreover, the findings underscore the necessity of integrating guided reflection into educational settings. Such reflections could help students critically analyze their interactions with AI, fostering deeper learning and understanding.

Future Directions

Future research could leverage learning analytics to automate the evaluation of critical interactions, analyzing metrics such as similarity scores between student submissions and AI outputs, prompting efficiency indicators, and diversity in source referencing. Expanding the dataset across varying disciplines and educational contexts would provide broader insights into AI's role in education.
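One plausible realization of the similarity-score metric mentioned above is TF-IDF cosine similarity between a student submission and the AI output it drew on. The sketch below, using scikit-learn, is an assumed implementation for illustration rather than anything proposed in the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def submission_ai_similarity(student_text: str, ai_output: str) -> float:
    """Cosine similarity between a student submission and an AI output,
    computed over TF-IDF vectors. Values near 1.0 suggest heavy reuse of
    AI-generated text; values near 0.0 suggest largely independent writing."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(
        [student_text, ai_output]
    )
    return float(cosine_similarity(tfidf[0:1], tfidf[1:2])[0, 0])
```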

Overall, the paper initiates an important discussion on the implications of AI integration in education, laying the groundwork for further investigations into optimizing human-AI partnerships in learning environments.
