Automatically Generating CS Learning Materials with Large Language Models (2212.05113v1)

Published 9 Dec 2022 in cs.CY

Abstract: Recent breakthroughs in LLMs, such as GPT-3 and Codex, now enable software developers to generate code based on a natural language prompt. Within computer science education, researchers are exploring the potential for LLMs to generate code explanations and programming assignments using carefully crafted prompts. These advances may enable students to interact with code in new ways while helping instructors scale their learning materials. However, LLMs also introduce new implications for academic integrity, curriculum design, and software engineering careers. This workshop will demonstrate the capabilities of LLMs to help attendees evaluate whether and how LLMs might be integrated into their pedagogy and research. We will also engage attendees in brainstorming to consider how LLMs will impact our field.

Authors (8)
  1. Stephen MacNeil (37 papers)
  2. Andrew Tran (8 papers)
  3. Juho Leinonen (41 papers)
  4. Paul Denny (67 papers)
  5. Joanne Kim (8 papers)
  6. Arto Hellas (31 papers)
  7. Seth Bernstein (6 papers)
  8. Sami Sarsa (17 papers)
Citations (34)