Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges (2401.08664v3)

Published 27 Dec 2023 in cs.AI and cs.CL

Abstract: Online education platforms use the internet to distribute educational resources and aim to provide convenient learning, but they often fall short in real-time communication with students and struggle to address the diverse obstacles students encounter throughout their learning journey. Resolving these problems is a significant challenge for traditional deep learning models: it requires not only a broad spectrum of subject knowledge but also the ability to understand each student's individual difficulties, a level of personalization that traditional machine learning models lack. The recent emergence of LLMs offers a possible way to address this issue by comprehending individual requests. Although LLMs have succeeded in many fields, building an LLM-based education system remains challenging because of the wide range of educational skills required. This paper reviews recent LLM research on educational capabilities, including mathematics, writing, programming, reasoning, and knowledge-based question answering, with the aim of exploring their potential for constructing the next-generation intelligent education system. For each capability, we investigate two aspects. First, we examine the current state of LLMs with respect to that capability: how advanced they have become, whether they surpass human abilities, and what deficiencies remain. Second, we evaluate whether the development methods for LLMs in that area are generalizable, that is, whether they can be applied to construct a comprehensive educational supermodel with strengths across multiple capabilities rather than being effective in only a single aspect.

Authors (9)
  1. Qingyao Li (8 papers)
  2. Lingyue Fu (8 papers)
  3. Weiming Zhang (135 papers)
  4. Xianyu Chen (14 papers)
  5. Jingwei Yu (3 papers)
  6. Wei Xia (147 papers)
  7. Weinan Zhang (322 papers)
  8. Ruiming Tang (171 papers)
  9. Yong Yu (219 papers)
Citations (11)