Quick Red Fox (QRF): Real-Time Interview System
- Quick Red Fox (QRF) is an open-source, server–client framework that facilitates in-situ, data-driven classroom interviews via real-time log triggers.
- It integrates behavioral, affect, and self-regulated-learning detectors into a server–client architecture built on Python, Node.js, Firebase, and Android apps.
- QRF standardizes trigger prioritization and interview workflows, reducing research waste and enhancing efficiency in qualitative educational studies.
The Quick Red Fox (QRF) system is an open-source, server–client framework designed to facilitate Data‐Driven Classroom Interviews (DDCIs)—short, highly targeted qualitative interviews with students triggered at precise moments of theoretical interest during interactions with digital learning environments. Integrating in real time with state-of-the-art student‐modeling technologies, QRF operationalizes a methodology that tightly couples log-based behavior/event detection, automated researcher guidance, and robust data management to optimize in-situ qualitative data capture in classrooms (Ocumpaugh et al., 17 Nov 2025).
1. System Architecture and Core Components
QRF operates on a server–client paradigm designed for low-latency, real-time integration with existing educational software, typically intelligent tutoring systems or educational games. The architecture comprises several interlocking modules:
- Learning-Software Client: The target digital learning environment is instrumented with a lightweight Python polling script that continuously monitors student log data and emits candidate triggers based on custom logic.
- QRF Dispatcher/Server: Implemented in Python and Node.js, the Dispatcher receives triggers through an open WebSocket (PieSocket) or HTTP endpoint, logs them in Firebase Realtime Database (in JSON format), then applies prioritization algorithms to push the highest-priority triggers to connected QRF Android apps.
- Firebase Real-Time Database: Serves as the central persistent storage for triggers, annotations, audio files, and interviewer feedback, supporting both live app interactions and a web-based dashboard for off-floor research monitoring.
- QRF Dashboard: Web-based UI connected via JavaScript SDK for live tracking of trigger events, interview metadata, and status monitoring.
- Android Client App: A Kotlin-based, responsive mobile application that (i) maintains a WebSocket for low-latency trigger delivery, (ii) displays event, user, and priority information, (iii) provides controls for audio recording and feedback, and (iv) enables local storage/export of interview artifacts.
Communication requires robust network connectivity, with optional mobile hotspot or VPN usage to bypass restrictive institutional firewalls [(Ocumpaugh et al., 17 Nov 2025), Section 2.6].
2. Integration with Student-Modeling Technologies
QRF leverages the output streams of behavior detectors, affect detectors, and self-regulated-learning detectors within the learning environment:
- Behavior-Sensing: Examples include block placements in games like Minecraft, tool use frequencies, and navigation patterns.
- Affect-Sensing: Real-time affective state predictions (e.g., BROMP-derived epistemic-emotion states at 20-second intervals in platforms like Betty’s Brain).
- Self-Regulated-Learning Sensing: Detectors utilize sequences of hint requests, deletions, or reflection steps.
A Python polling script, customizable for polling interval and detection logic, continuously retrieves the latest student interaction logs, applies event-detection rules, and sends triggers to the Dispatcher. Feedback and interview metadata cycle through this pipeline, enabling consistent, synchronized, and theory-aligned prompt delivery [(Ocumpaugh et al., 17 Nov 2025), Figure 1.1, 3.1].
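The polling step described above can be sketched as follows. The log API, detection rule, and field names here are illustrative assumptions, not QRF's actual client code; the real script would pull from the learning environment's logs and send each trigger to the Dispatcher over WebSocket or HTTP.

```python
import json

# Hypothetical stand-in for the learning environment's log API.
def fetch_recent_logs(since):
    """Return interaction events logged after `since` (stubbed for illustration)."""
    return [{"user": "S01", "event": "redstone_torch_placed", "ts": since + 1}]

def detect_triggers(events, threshold=1):
    """Apply a simple event-detection rule: fire when a user's event count meets the threshold."""
    counts = {}
    for e in events:
        counts[e["user"]] = counts.get(e["user"], 0) + 1
    return [
        {"trigger_id": f"high_use:{user}", "user": user, "count": n, "priority": 2}
        for user, n in counts.items() if n >= threshold
    ]

def poll_once(last_ts):
    """One iteration of the polling loop: pull logs, detect, serialize triggers as JSON."""
    events = fetch_recent_logs(last_ts)
    triggers = detect_triggers(events)
    payloads = [json.dumps(t) for t in triggers]   # sent to the Dispatcher in practice
    new_ts = max((e["ts"] for e in events), default=last_ts)
    return payloads, new_ts
```

In deployment, `poll_once` would run inside a loop that sleeps for the configured polling interval between iterations.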
3. Trigger Definition and Detection Algorithms
Triggers are formalized using event description templates, variables (e.g., user, count, time window), and associated metadata such as unique trigger IDs, timestamps, and priority labels. A typical trigger template is "High Block Usage: {user} has placed {x} redstone torches in last {y} s."
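A trigger record rendered from such a template might look like the following. The paper specifies the metadata fields but not an exact schema, so the key names and priority convention here are assumptions.

```python
import json
from datetime import datetime, timezone

# Illustrative trigger record; field names are assumptions, not QRF's exact schema.
template = "High Block Usage: {user} has placed {x} redstone torches in last {y} s."

trigger = {
    "trigger_id": "high-block-usage-0042",
    "timestamp": datetime(2025, 11, 17, 10, 30, tzinfo=timezone.utc).isoformat(),
    "priority": 1,                        # lower number = higher priority (assumed)
    "message": template.format(user="S14", x=9, y=120),
}

payload = json.dumps(trigger)             # JSON form logged to Firebase / pushed to the app
```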
Trigger-detection logic encompasses:
- Regular polling: a Python loop that pulls the latest logs, computes candidate triggers, then sleeps for the polling interval.
- Redundancy throttling (e.g., "No Observations in 20 min" only if not triggered for that user within the same window).
- Eight prioritization guidelines:
- Alignment with core research question.
- Preference for rare over frequent events.
- Recency bias for newer events.
- Short-lived triggers prioritized over persistent ones.
- Staggering to avoid temporal clustering ("burstiness").
- Ensuring cross-student sampling breadth.
- Enforcing per-student cooldown to prevent repeated interruptions.
- Inclusion of a random-event fallback to avoid interviewer downtime.
Dispatcher parameters are tunable, including expiration time (e.g., 3 minutes), cooldown period (e.g., 5 minutes), max enqueues, and random-trigger activation [(Ocumpaugh et al., 17 Nov 2025), Table 3.3].
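A minimal sketch of the Dispatcher's queueing behavior, combining priority ordering with the tunable expiration and per-student cooldown parameters above. The class and method names are illustrative; QRF's actual Dispatcher is a Python/Node.js service, and this only models the prioritization logic.

```python
import heapq
import time

class Dispatcher:
    """Illustrative trigger dispatch: priority ordering, trigger expiration,
    and a per-student cooldown (defaults mirror the paper's example values)."""

    def __init__(self, expiry_s=180, cooldown_s=300):
        self.expiry_s = expiry_s            # e.g., 3-minute expiration
        self.cooldown_s = cooldown_s        # e.g., 5-minute per-student cooldown
        self._queue = []                    # heap of (priority, enqueue_time, seq, trigger)
        self._count = 0                     # sequence number breaks priority ties
        self._last_interview = {}           # user -> time of last dispatched trigger

    def enqueue(self, trigger, now=None):
        now = time.time() if now is None else now
        heapq.heappush(self._queue, (trigger["priority"], now, self._count, trigger))
        self._count += 1

    def next_trigger(self, now=None):
        """Pop the highest-priority unexpired trigger whose student is off cooldown."""
        now = time.time() if now is None else now
        deferred, result = [], None
        while self._queue:
            priority, t_in, seq, trig = heapq.heappop(self._queue)
            if now - t_in > self.expiry_s:
                continue                    # expired: drop silently
            if now - self._last_interview.get(trig["user"], float("-inf")) < self.cooldown_s:
                deferred.append((priority, t_in, seq, trig))   # on cooldown: keep for later
                continue
            self._last_interview[trig["user"]] = now
            result = trig
            break
        for item in deferred:               # restore cooled-down triggers to the queue
            heapq.heappush(self._queue, item)
        return result
```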
4. Trigger and Interview Prompt Development
Trigger development is both theory-driven and data-driven. Selection relies on mapping theoretical constructs of interest, such as facilitative/inhibitory emotion cycles, onto loggable behavior, supplemented by correlational analysis of historical data (e.g., block-use count vs. survey subscales). Statistical thresholds (percentiles/confidence intervals) are established using historical logs to ensure trigger frequency is appropriate for researcher bandwidth.
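The percentile-based thresholding can be sketched with the standard library; the paper does not specify the exact statistic, so the inclusive-quantile choice here is an assumption.

```python
import statistics

def trigger_threshold(historical_counts, percentile=95):
    """Choose a trigger threshold from historical per-student counts so that
    the event fires rarely enough for interviewer bandwidth (illustrative)."""
    # statistics.quantiles with n=100 yields the 1st..99th percentile cut points
    cut_points = statistics.quantiles(historical_counts, n=100, method="inclusive")
    return cut_points[percentile - 1]
```

For example, with per-session block-use counts from a pilot deployment, setting the threshold at the 90th percentile ensures roughly one student in ten ever crosses it.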
Trigger code is modular, using flag/state management and timestamp checks for efficient retrigger avoidance. Prompts are co-designed to correspond with triggers, beginning with student-centered openers (e.g., "How are you feeling?") and pivoting to event-linked inquiry (e.g., "What were you focusing on when that happened?"). Protocols emphasize an asset-based, non-authoritarian posture ("Big Sister Approach") and require explicit student assent before each interview instance. Skipping or overriding triggers is allowed for cases of recent interview, distress, or technical malfunction [(Ocumpaugh et al., 17 Nov 2025), Tables 4.1–4.2].
5. Interview Fieldwork Process and Workflow
Field deployment follows a rigorous workflow that facilitates both logistical and ethical compliance:
- Preparation: Whole-class introduction, demonstration of equipment, parental consent, and student assent checkpoints.
- In-Class Workflow:
- Application transitions from "Connected" to "Ready."
- Trigger details populate event/user/priority data.
- Interviewer approaches student, confirms assent.
- Employs student-centered opener followed by trigger-specific probe.
- Utilizes neutral back-channel communication; interviewer avoids instructional or corrective stances.
- Audio recording lasts approximately 3–5 minutes; interviewer records notes and concludes.
- System transitions to next queued trigger.
- Debrief: Daily sessions include spot-checking audio, reviewing logs against consent and seating charts, evaluating firewall/network performance, and monitoring trigger firing rates for reprioritization [(Ocumpaugh et al., 17 Nov 2025), Chapter 4, Appendix 4].
6. Interview Data Analysis and Storage
Audio and typed notes are exported from Android local storage to secure, FERPA-compliant repositories. Transcription protocols involve a combination of automated and manual (AV-friendly) tools, with a standard human review ratio of 5:1 for automated outputs. Personal identifiers are de-identified, replaced by pseudonyms or "[REDACTED]." Annotation conventions document speech features, including pauses, hesitations, overlap, slang, jargon, and inaudible segments [(Ocumpaugh et al., 17 Nov 2025), Figures 5.1–5.11].
Coding is performed using a codebook developed through deductive (theory-driven) and inductive (empirical) approaches, with inter-rater reliability (IRR) checks utilizing metrics such as Cohen’s κ, Fleiss’ κ, and Krippendorff’s α. Codes below frequency or IRR thresholds are either refined or collapsed into broader categories.
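For two raters assigning codes to the same interview segments, Cohen's κ can be computed directly; this is the standard formula, shown here as a self-contained sketch.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' code assignments over the same segments:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)
```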
Each interview record is metadata-aligned via studentID and timestamp, enabling linkage to game logs and corresponding pre/post surveys. Analytical options include correlational analysis (e.g., code frequencies with survey/behavioral outcomes), temporal pattern segmentation, ordered network analysis, student-response transition sequence mining, behavioral profiling/clustering, and selective qualitative re-examination for deeper identity/motivation/metacognition insights [(Ocumpaugh et al., 17 Nov 2025), Section 5.8].
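The studentID/timestamp linkage can be sketched as a windowed join; field names and the window size are assumptions for illustration, not QRF's analysis code.

```python
def link_interviews_to_logs(interviews, logs, window_s=300):
    """Attach to each interview record the game-log events for the same
    studentID that fall within `window_s` seconds before the interview's
    trigger timestamp (field names are illustrative)."""
    linked = []
    for iv in interviews:
        context = [
            e for e in logs
            if e["studentID"] == iv["studentID"]
            and 0 <= iv["timestamp"] - e["timestamp"] <= window_s
        ]
        linked.append({**iv, "context_events": context})
    return linked
```

The linked records can then feed the correlational, sequence-mining, or clustering analyses described above.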
7. Significance and Methodological Impact
QRF supports detection and contextualization of "needle-in-a-haystack" or "one-shot" events within ecologically valid classroom settings by combining event-driven quantitative logging with targeted qualitative interviews synchronously anchored to significant student interactions. The methodology reduces research waste, optimizes researcher efficiency via automation and prioritization, and enables rapid iteration on triggers, prompt phrasing, and prioritization logic during ongoing fieldwork.
QRF’s tight integration with learning analytics pipelines, formalization of trigger and prioritization rules, and standardization of interviewer workflows and analytic protocols collectively render large-scale, in-classroom, rapid qualitative data collection feasible, efficient, and scalable (Ocumpaugh et al., 17 Nov 2025).