Immersive Code Learning Framework (ICLF)
- ICLF is a scalable Git-based framework that manages and evaluates student programming projects using an automated, reproducible pipeline.
- It leverages a templating system and private student forks to simulate real-world software workflows and secure grading processes.
- ICLF has shown high scalability and positive feedback in both on-campus courses and MOOCs, enhancing transparency and learning outcomes.
The Immersive Code Learning Framework (ICLF) is a scalable, Git-based organizational pipeline designed for managing and evaluating student programming projects in computer science education. The framework enables students to engage with authentic software development workflows by starting with an existing code base and iteratively completing tasks to pass predefined tests. ICLF centralizes instructor management in a hidden parent repository containing canonical solutions and leverages an automated templating system to expose only the necessary scaffolding and visible tests to students. Each student receives a private Git fork for individual work, and grading platforms integrate automated test evaluation, plagiarism detection, and progress tracking. The approach has been successfully deployed over multiple years in on-campus courses and MOOCs, showing high scalability, transparency, and alignment with real-world software engineering practices (Schaus et al., 21 Jan 2026).
1. Architecture and Component Structure
At its core, ICLF implements a Git-centric, multi-repository structure facilitating controlled code dissemination and secure, automated evaluation. The framework's architecture includes four primary components:
- Hidden Parent Repository: Maintained privately by instructors, this repository contains full reference implementations, both public and hidden test suites, datasets, and configuration files. Source files contain annotated delimiters (// BEGIN STRIP ... // END STRIP) specifying code sections to be omitted from student templates.
- Templating System: A command-line tool or GitHub Action ("stripper") processes the parent repository to excise marked solution code, inject method stubs, and exclude hidden tests, producing a template devoid of implementation details.
- Intermediate Public Repository: Serving as the canonical starter code, this repository exposes only the sanitized scaffolding and a subset of public tests. Students fork this repository but do not contribute changes upstream.
- Student Private Forks: For each enrolled student, a private fork is automatically generated, granting collaboration rights exclusively to the student and course staff. This setup isolates individual student progress and enables per-student assessment pipelines.
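The delimiter-based stripping step can be illustrated with a minimal sketch. The paper does not specify the stripper's implementation; the class and method names below are hypothetical, and the real tool also handles hidden-test exclusion and stub injection across whole repositories.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "stripper" templating step: lines between the
// BEGIN STRIP / END STRIP delimiters are dropped and replaced with a TODO
// stub, so the public template contains no reference implementation.
public class TemplateStripper {

    static final String BEGIN = "// BEGIN STRIP";
    static final String END = "// END STRIP";

    public static List<String> strip(List<String> source) {
        List<String> template = new ArrayList<>();
        boolean inSolution = false;
        for (String line : source) {
            String trimmed = line.trim();
            if (trimmed.equals(BEGIN)) {
                inSolution = true;
                // keep indentation, swap the marker for a stub comment
                template.add(line.replace(BEGIN, "// TODO: implement"));
            } else if (trimmed.equals(END)) {
                inSolution = false;          // resume copying after the block
            } else if (!inSolution) {
                template.add(line);          // scaffolding is kept untouched
            }
        }
        return template;
    }

    public static void main(String[] args) {
        List<String> solution = List.of(
            "int fib(int n) {",
            "    // BEGIN STRIP",
            "    return n < 2 ? n : fib(n - 1) + fib(n - 2);",
            "    // END STRIP",
            "}");
        TemplateStripper.strip(solution).forEach(System.out::println);
    }
}
```

Running this on the three-line solution emits only the signature, a TODO stub, and the closing brace, which is the shape of the scaffolding students see in the intermediate public repository.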
A diagrammatic summary is as follows:
+-------------------------+
| Hidden Parent Repo | <-- instructors only
+-------------------------+
|
(strip & CI)|
v
+-------------------------+
| Intermediate Public Repo| <-- student template
+-------------------------+
| ^
fork| |pull
v |
+-------------------------+
| Student Private Forks | <-- isolated workspaces
+-------------------------+
|
push|
v
+-------------------------+
| Grading Platform | <-- CI & feedback
+-------------------------+
This multi-stage design enables controlled, incremental disclosure of assignment materials and separation of solution code from student-facing scaffolding (Schaus et al., 21 Jan 2026).
2. Workflow: Project Setup through Evaluation
ICLF operationalizes assignment delivery and assessment via a reproducible, automated pipeline:
A. Initial Project Setup (Instructor)
- Construct parent repository with complete solutions, annotated delimiters, and comprehensive test suites.
- Push to a secure Git hosting platform.
- Trigger a continuous integration (CI) job to (a) verify baseline correctness, (b) apply the "stripper" templating procedure, and (c) update the intermediate template repository.
B. Student Enrollment
- Students register through the grading platform (e.g., INGInious), associating their GitHub account.
- A bot forks per student, assigning collaborator permissions and optionally recording mappings for grading.
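The forking bot can be sketched against GitHub's REST API, which creates forks via POST /repos/{owner}/{repo}/forks. The ForkBot class and the request-building split below are illustrative assumptions, not the paper's implementation; granting the student collaborator access is a separate call (PUT /repos/{owner}/{repo}/collaborators/{username}).

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Illustrative sketch of the per-student forking bot. It only builds the
// HTTP request (no network call), showing the endpoint and payload a bot
// would send to create one distinctly named private fork per student.
public class ForkBot {

    /** template is "owner/repo"; the fork is named "<repo>-<student>". */
    public static HttpRequest forkRequest(String template, String student, String token) {
        String repoName = template.substring(template.indexOf('/') + 1);
        String body = String.format("{\"name\": \"%s-%s\"}", repoName, student);
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/repos/" + template + "/forks"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/vnd.github+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = forkRequest("course-org/template", "alice", "TOKEN");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request with java.net.http.HttpClient and a real token would create the fork; recording the student-to-fork mapping for grading would follow as a separate bookkeeping step.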
C. Student Development Process
- Students clone their private forks.
- They implement the required functions, using visible tests for guidance.
- Local tests are executable with JUnit and JavaGrader annotations for immediate, offline feedback.
- Students commit and push changes.
D. Updating Assignments
Instructors can update the parent repository at any time. The templating and continuous integration pipeline ensures the intermediate public template is regenerated, enabling students to merge upstream changes, including new instructions or bugfixes, mirroring real-world "upstream" Git workflows.
E. Grading and Feedback
On every push to a student's private fork, the grading platform:
- Validates synchronization with the current template version.
- Archives sources for subsequent plagiarism checking.
- Replaces visible student tests with the comprehensive instructor test suite.
- Executes tests in a secure, sandboxed environment.
- Returns detailed feedback and scores, which are logged chronologically for ongoing progress assessment.
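The test-replacement step above can be sketched as a pure function over a file tree. The TestInjector name and the in-memory Map representation are simplifying assumptions; the real pipeline operates on checked-out repositories in a sandbox.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the test-swap step: before grading, everything under the
// student's test folder is discarded and replaced by the instructor's full
// suite, so students cannot weaken or delete tests to inflate their score.
public class TestInjector {

    /** Returns the file tree (path -> content) that is actually graded. */
    public static Map<String, String> inject(Map<String, String> submission,
                                             Map<String, String> instructorSuite) {
        Map<String, String> graded = new HashMap<>();
        // keep the student's sources, drop anything under src/test/
        submission.forEach((path, content) -> {
            if (!path.startsWith("src/test/")) graded.put(path, content);
        });
        // overlay the complete instructor suite (visible + hidden tests)
        graded.putAll(instructorSuite);
        return graded;
    }

    public static void main(String[] args) {
        Map<String, String> submission = Map.of(
            "src/main/Fib.java", "class Fib { /* student code */ }",
            "src/test/FibTest.java", "// tampered visible test");
        Map<String, String> suite = Map.of(
            "src/test/FibTest.java", "// canonical visible test",
            "src/test/FibHiddenTest.java", "// hidden test");
        System.out.println(inject(submission, suite).size());
    }
}
```

Note that the canonical visible test overwrites any student-side edit of the same file, while the hidden test appears only at grading time, never in the fork.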
This pipeline decouples grading from the Git platform, automates assignment evolution, and supports transparent, iterative student-instructor interaction (Schaus et al., 21 Jan 2026).
3. Automated Grading, Plagiarism Detection, and Metrics
Automated evaluation in ICLF is orchestrated through a sequence of continuous integration steps and test injection mechanisms:
Test Management
- Visible tests (annotated with JavaGrader's @Grade) are integrated in the public template, allowing for local execution and formative feedback.
- Hidden tests reside only in the instructor repository and are dynamically injected by the grading pipeline at submission time, preventing pre-release overfitting.
Continuous Integration and Feedback
- Template generation and grading are fully automated with zero manual intervention required after instructor push events.
- Each test is annotated with metadata: a weight, an optional timeout, and a custom feedback message.
Grading Formula
For a student $s$, a task $t$, and tests $i \in T_t$ with associated weights $w_i$, the grade is:

$$G(s, t) = \frac{\sum_{i \in T_t} w_i \, \mathrm{pass}_i(s)}{\sum_{i \in T_t} w_i}$$

where $\mathrm{pass}_i(s) \in \{0, 1\}$ indicates whether student $s$'s submission passes test $i$.
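Numerically, the grading rule is a weighted pass rate: each test contributes its weight when it passes, normalized by the total weight of the task's tests. A minimal sketch (class name illustrative):

```java
// Weighted grade: sum of the weights of passed tests, divided by the total
// weight of all tests for the task.
public class WeightedGrade {

    public static double grade(double[] weights, boolean[] passed) {
        double earned = 0, total = 0;
        for (int i = 0; i < weights.length; i++) {
            total += weights[i];
            if (passed[i]) earned += weights[i];
        }
        return total == 0 ? 0 : earned / total;
    }

    public static void main(String[] args) {
        // three tests with weights 1, 2, 3; the student passes the first two
        System.out.println(grade(new double[]{1, 2, 3},
                                 new boolean[]{true, true, false})); // prints 0.5
    }
}
```

Here the student earns weight 1 + 2 = 3 out of a total of 6, so the grade is 0.5 regardless of the raw count of tests passed.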
Plagiarism and Progress Tracking
- All student submissions are archived and batch-processed using code similarity analyzers (JPlag, MOSS), yielding pairwise similarity scores $s_{ij}$; pairs exceeding a threshold $\tau$ (e.g., $\tau = 0.7$) are flagged for manual inspection.
- Progress is recorded as a time series of graded submissions $\{(t_k, g_k)\}$ per student and task. For a cutoff date $D$, a task counts as completed if $\exists k : t_k \le D \wedge g_k \ge \theta$, where $\theta$ is the minimum passing score (Schaus et al., 21 Jan 2026).
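Both analytics passes reduce to simple computations over the archived data; a sketch under assumed representations (the class, record, and method names are illustrative, not the paper's):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two analytics passes: pairwise similarity scores above a
// threshold are flagged for manual review, and a task counts as completed
// by a cutoff date if some submission at or before that date reaches the
// minimum passing score.
public class SubmissionAnalytics {

    record Submission(long timestamp, double grade) {}

    /** Pairs (i, j), i < j, whose similarity meets the flagging threshold. */
    public static List<int[]> flagPairs(double[][] similarity, double threshold) {
        List<int[]> flagged = new ArrayList<>();
        for (int i = 0; i < similarity.length; i++)
            for (int j = i + 1; j < similarity.length; j++)
                if (similarity[i][j] >= threshold) flagged.add(new int[]{i, j});
        return flagged;
    }

    /** True if any submission up to the cutoff reaches the passing score. */
    public static boolean completedBy(List<Submission> history, long cutoff, double passing) {
        return history.stream()
                .anyMatch(s -> s.timestamp() <= cutoff && s.grade() >= passing);
    }

    public static void main(String[] args) {
        double[][] sim = {{1.0, 0.8}, {0.8, 1.0}};
        System.out.println(flagPairs(sim, 0.7).size());   // one flagged pair
        List<Submission> hist = List.of(
            new Submission(10, 0.4), new Submission(20, 0.9));
        System.out.println(completedBy(hist, 15, 0.5));   // prints false
    }
}
```

Because grading results are logged chronologically, the same history supports both the cutoff-date query and time-series views of per-student progress.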
4. Empirical Deployments and Quantitative Evaluation
ICLF has been tested longitudinally in two principal educational contexts:
Discrete Optimization (On-Campus)
- Cohort: 100 students/year, 8 programming mini-projects per offering.
- Average: 12 commits/student/task; 3:1 ratio of local commits to grading submissions.
- Temporal commit clustering: pronounced peaks 12–24 hours before deadlines.
edX MOOC: Constraint Programming
- Scale: 300 students/year, 10 assignments, self-paced with weekly recommendations.
- Commit activity: maximum on Fridays (university scheduled practice) and during the final week.
- Plagiarism: 2% of submissions flagged, all resolved via manual auditing.
- Student survey: 92% rated immediate feedback highly beneficial; 88% indicated increased preparedness for collaborative workflows.
Platform Throughput and Reliability
- Each offering: 20,000 continuous integration runs, all under 3 minutes/run.
- No recorded inconsistencies between locally visible and grader-side hidden test results.
- Batch plagiarism processing: 300 repositories completed in 5 minutes. This suggests strong scalability and robustness in deployment.
5. Strengths, Shortcomings, and Prospective Enhancements
Advantages
- Fidelity to real-world Git workflows (forking, pulling, merging, resolving conflicts).
- Fully automated, reproducible template and assignment updates—including CI-driven dissemination of changes.
- Rich, immediate student feedback via locally executable public tests, and rigorous, automated grading with additional hidden tests.
- Built-in mechanisms for progress tracking and plagiarism analytics.
Constraints
- Prerequisite proficiency with Git for all students; may necessitate preparatory training.
- Technical reliance on cloud platform REST APIs and CI infrastructure (e.g., GitHub Actions).
- The default JavaGrader integration restricts current support to JVM/JUnit environments; porting to other language/test ecosystems requires additional effort.
- Large numbers of forks may cause management overhead or organization clutter if not carefully administered.
Extension Possibilities
- Extension to additional languages (Python/pytest, C++/Catch2) using corresponding template and grader tools.
- Group project support via multi-collaborator forks and per-team grading.
- Integration of peer review post-auto-grading.
- On-the-fly test instance generation in the style of online judges for robustness hardening.
- Analytics dashboards providing real-time insights on commit behavior and task-level performance. A plausible implication is further reductions in grader workload and enhanced student engagement as analytic features mature.
6. Significance and Adoption Context
ICLF addresses scaling, transparency, and realism in programming project management, streamlining course operations for both large-enrollment MOOCs and traditional classroom settings. By closely modeling professional software engineering practices, the framework bridges theoretical instruction with industrial paradigms. Its architecture enables incremental updates and iterative assessment without disrupting student workflow, and the empirical results indicate high scalability and positive student reception. Its principles suggest applicability to broader CS education contexts seeking reproducible, evolvable, and transparent instructional pipelines (Schaus et al., 21 Jan 2026).