
Teach AI How to Code: Using Large Language Models as Teachable Agents for Programming Education (2309.14534v3)

Published 25 Sep 2023 in cs.HC

Abstract: This work investigates LLMs as teachable agents for learning by teaching (LBT). LBT with teachable agents helps learners identify knowledge gaps and discover new knowledge. However, teachable agents require expensive programming of subject-specific knowledge. While LLMs as teachable agents can reduce the cost, LLMs' expansive knowledge as tutees discourages learners from teaching. We propose a prompting pipeline that restrains LLMs' knowledge and makes them initiate "why" and "how" questions for effective knowledge-building. We combined these techniques into TeachYou, an LBT environment for algorithm learning, and AlgoBo, an LLM-based tutee chatbot that can simulate misconceptions and unawareness prescribed in its knowledge state. Our technical evaluation confirmed that our prompting pipeline can effectively configure AlgoBo's problem-solving performance. Through a between-subject study with 40 algorithm novices, we also observed that AlgoBo's questions led to knowledge-dense conversations (effect size=0.71). Lastly, we discuss design implications, cost-efficiency, and personalization of LLM-based teachable agents.

Overview of TeachYou: Leveraging LLMs in Programming Education

The paper "Teach AI How to Code: Using LLMs as Teachable Agents for Programming Education" is a comprehensive paper on utilizing LLMs as teachable agents in Learning by Teaching (LBT) scenarios, specifically focusing on algorithm learning. This research aims to explore the potential of LLMs to facilitate the teaching process by acting as virtual students, termed "AlgoBo," with the system "TeachYou" providing a platform for interactive teaching and learning activities.

LLMs have demonstrated remarkable proficiency in generating contextual dialogue, mimicking roles, and learning from demonstrations. The paper proposes using LLMs to simulate tutee behavior, reducing the cost of authoring subject-specific knowledge and expanding the scalability of teachable agents in educational environments. A central challenge it addresses is LLMs' extensive pre-trained knowledge, which can discourage learners from teaching: a tutee that already appears to know everything leaves little for the learner to explain.

Technical Approach

The paper introduces the "Reflect-Respond" prompting pipeline, designed to simulate cognitive learning behaviors in LLM-based agents. The pipeline manages AlgoBo's knowledge state, enabling it to exhibit a restricted knowledge level and to mimic learning progression through conversation; a minimal illustrative sketch follows the list of properties below.

  1. Reconfigurability: The pipeline allows for precise control over AlgoBo’s knowledge state, where educators can set particular misconceptions and prevent the LLM from self-correcting until taught by a human learner.
  2. Persistence: AlgoBo is engineered to maintain knowledge states consistently over the course of a conversation, ensuring that learners experience a realistic tutoring session. The system is carefully controlled so that unprocessed information from casual dialogues does not alter its intended knowledge level.
  3. Adaptability: On receiving correct tutoring, AlgoBo updates its knowledge state accurately by incorporating new knowledge into its responses, demonstrating learning progression.
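To make these properties concrete, here is a minimal, illustrative Python sketch of a Reflect-Respond style loop under simplifying assumptions: the knowledge state is a plain list of facts plus prescribed misconceptions, and `call_llm` is a hypothetical placeholder for an LLM API call. The paper's actual prompts, state schema, and update rules are more elaborate.

```python
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM chat-completion call."""
    raise NotImplementedError


@dataclass
class KnowledgeState:
    facts: list[str] = field(default_factory=list)            # what the tutee currently "knows"
    misconceptions: list[str] = field(default_factory=list)   # prescribed errors held until corrected


@dataclass
class Tutee:
    state: KnowledgeState

    def reflect(self, tutor_message: str) -> None:
        """Reflect step: decide whether the tutor's message corrects a held misconception."""
        verdict = call_llm(
            f"Current misconceptions: {self.state.misconceptions}\n"
            f"Tutor said: {tutor_message}\n"
            "Name the misconception this message correctly fixes, or reply NONE."
        ).strip()
        if verdict in self.state.misconceptions:
            self.state.misconceptions.remove(verdict)    # adaptability: update only when taught
            self.state.facts.append(tutor_message)       # persistence: keep the newly taught knowledge

    def respond(self, tutor_message: str) -> str:
        """Respond step: answer only from the knowledge state, not the LLM's full knowledge."""
        return call_llm(
            "You are a novice student. Answer ONLY from the knowledge below and "
            "keep any listed misconceptions until they are explicitly corrected.\n"
            f"Known facts: {self.state.facts}\n"
            f"Misconceptions you still hold: {self.state.misconceptions}\n"
            f"Tutor said: {tutor_message}"
        )
```

In a driver loop, each tutor message would pass through `reflect` before `respond`, so casual dialogue that teaches nothing never alters the intended knowledge level.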

Empirical Findings

The technical evaluation confirmed that the pipeline can effectively configure AlgoBo's knowledge state across topics such as binary search, merge sort, and breadth-first search, achieving the desired reconfigurability, persistence, and adaptability.

In the user study with 40 algorithm novices, the addition of Mode-shifting, in which AlgoBo alternates between the roles of help receiver and questioner, significantly increased the density of knowledge-building moves in LBT dialogues (effect size = 0.71). By prompting learners to self-explain and elaborate on what they were teaching, the feature produced noticeably more constructive conversation.
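As an illustration only, a mode-shifting controller might look like the sketch below; the trigger conditions are assumptions for exposition, not the paper's actual policy.

```python
def choose_mode(turn_index: int, tutor_message: str) -> str:
    """Illustrative mode-shifting policy: return 'questioner' when the tutee should
    push the tutor to explain, otherwise 'help_receiver' (heuristics are assumed)."""
    pasted_code_without_reasoning = "def " in tutor_message and "because" not in tutor_message
    # Periodically, or when the tutor pastes code without explaining it, switch to
    # asking "why"/"how" questions that prompt the learner to self-explain.
    if turn_index % 3 == 2 or pasted_code_without_reasoning:
        return "questioner"      # e.g., "Why do we move the left pointer here?"
    return "help_receiver"       # default: acknowledge and apply the tutor's advice
```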

Practical Implications and Future Directions

Employing LLMs in educational settings promises scalability and lowers the cost and barriers traditionally associated with teachable agents. By letting learners teach a configurable LLM tutee, the system offers a pedagogically effective platform for algorithm education that can be adjusted to different knowledge levels and misconceptions.

However, the paper highlights the need to balance learner expectations with system capabilities. The authors suggest future work on giving learners direct control over the cognitive behaviors of LLM-based agents, such as customizing the granularity of knowledge states and the interaction frequency. Such enhancements could yield more personalized and effective learning experiences.

Moreover, the paper notes potential expansion beyond algorithm learning, suggesting that applying similar methodologies to other domains, such as mathematics or science, could yield comparable gains in knowledge acquisition and engagement.

In summary, this research illustrates a strategic pivot toward the integration of AI in education. The TeachYou platform, powered by AlgoBo, exemplifies how LLMs, when correctly harnessed, can become not just repositories of vast information but active participants in interactive learning processes, transforming the educational landscape.

Authors (4)
  1. Hyoungwook Jin (4 papers)
  2. Seonghee Lee (3 papers)
  3. Hyungyu Shin (4 papers)
  4. Juho Kim (56 papers)
Citations (24)