Towards an Efficient, Customizable, and Accessible AI Tutor (2510.06255v1)
Abstract: The integration of LLMs into education offers significant potential to enhance accessibility and engagement, yet their high computational demands limit usability in low-resource settings, exacerbating educational inequities. To address this, we propose an offline Retrieval-Augmented Generation (RAG) pipeline that pairs a small language model (SLM) with a robust retrieval mechanism, enabling factual, contextually relevant responses without internet connectivity. We evaluate the efficacy of this pipeline using domain-specific educational content, focusing on biology coursework. Our analysis highlights key challenges: smaller models, such as SmolLM, struggle to effectively leverage extended contexts provided by the RAG pipeline, particularly when noisy or irrelevant chunks are included. To improve performance, we propose exploring advanced chunking techniques, alternative small or quantized versions of larger models, and moving beyond traditional metrics like MMLU to a holistic evaluation framework assessing free-form responses. This work demonstrates the feasibility of deploying AI tutors in constrained environments, laying the groundwork for equitable, offline, and device-based educational tools.
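To make the described architecture concrete, below is a minimal sketch of an offline RAG pipeline of the kind the abstract outlines: local embeddings, a top-k similarity search over chunked course material, and a small local generator such as SmolLM. This is an illustrative assumption, not the paper's implementation; the embedding model, the SmolLM checkpoint name, the `biology_notes.txt` path, the chunk size, and the top-k value are all placeholders chosen for the sketch.

```python
# Minimal offline RAG sketch (assumed setup, not the paper's exact pipeline):
# sentence-transformers for chunk embeddings, cosine-similarity retrieval,
# and a small local model via Hugging Face transformers for generation.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline


def chunk_text(text, chunk_size=200):
    # Naive fixed-size chunking; the paper proposes exploring more advanced
    # chunking strategies, since noisy or irrelevant chunks hurt small models.
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]


# Embed the course material once; everything runs locally, no internet needed.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
corpus = chunk_text(open("biology_notes.txt").read())  # placeholder content file
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

# Small local generator; SmolLM is one SLM the paper evaluates (checkpoint name
# here is illustrative).
generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM-360M-Instruct")


def answer(question, top_k=3):
    # Retrieve the top-k most similar chunks by cosine similarity.
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    context = "\n".join(corpus[hit["corpus_id"]] for hit in hits)
    # Condition the SLM on the retrieved context.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    out = generator(prompt, max_new_tokens=128, return_full_text=False)
    return out[0]["generated_text"]


print(answer("What is the role of mitochondria in cellular respiration?"))
```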