Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines (2504.07840v2)

Published 10 Apr 2025 in cs.HC, cs.AI, and cs.CL

Abstract: LLMs have transformed human-computer interaction by enabling natural language-based communication with AI-powered chatbots. These models are designed to be intuitive and user-friendly, allowing users to articulate requests with minimal effort. However, despite their accessibility, studies reveal that users often struggle with effective prompting, resulting in inefficient responses. Existing research has highlighted both the limitations of LLMs in interpreting vague or poorly structured prompts and the difficulties users face in crafting precise queries. This study investigates learner-AI interactions through an educational experiment in which participants receive structured guidance on effective prompting. We introduce and compare three types of prompting guidelines: a task-specific framework developed through a structured methodology and two baseline approaches. To assess user behavior and prompting efficacy, we analyze a dataset of 642 interactions from 107 users. Using Von NeuMidas, an extended pragmatic annotation schema for LLM interaction analysis, we categorize common prompting errors and identify recurring behavioral patterns. We then evaluate the impact of different guidelines by examining changes in user behavior, adherence to prompting strategies, and the overall quality of AI-generated responses. Our findings provide a deeper understanding of how users engage with LLMs and the role of structured prompting guidance in enhancing AI-assisted communication. By comparing different instructional frameworks, we offer insights into more effective approaches for improving user competency in AI interactions, with implications for AI literacy, chatbot usability, and the design of more responsive AI systems.

Authors (16)
  1. Cansu Koyuturk
  2. Emily Theophilou
  3. Sabrina Patania
  4. Gregor Donabauer
  5. Andrea Martinenghi
  6. Chiara Antico
  7. Alessia Telari
  8. Alessia Testa
  9. Sathya Bursic
  10. Franca Garzotto
  11. Davinia Hernandez-Leo
  12. Udo Kruschwitz
  13. Davide Taibi
  14. Simona Amenta
  15. Martin Ruskov
  16. Dimitri Ognibene