
Automatic deductive coding in discourse analysis: an application of large language models in learning analytics (2410.01240v1)

Published 2 Oct 2024 in cs.CL and cs.HC

Abstract: Deductive coding is a common discourse analysis method widely used by learning science and learning analytics researchers to understand teaching and learning interactions. It typically requires researchers to manually label every discourse unit to be analyzed according to a theoretically guided coding scheme, which is time-consuming and labor-intensive. The emergence of LLMs such as GPT has opened a new avenue for automatic deductive coding to overcome the limitations of the traditional approach. To evaluate the usefulness of LLMs for automatic deductive coding, we employed three classification methods driven by different artificial intelligence technologies: a traditional text classification method with text feature engineering, a BERT-like pretrained language model, and a GPT-like pretrained language model. We applied these methods to two different datasets and explored the potential of GPT and prompt engineering for automatic deductive coding. By analyzing and comparing the accuracy and Kappa values of the three classification methods, we found that GPT with prompt engineering outperformed the other two methods on both datasets with a limited number of training samples. By providing detailed prompt structures, the reported work demonstrates how LLMs can be used to implement automatic deductive coding.
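The abstract evaluates the automatic coders by accuracy and Kappa against human-assigned codes. As a minimal sketch of that evaluation step (the labels and code names below are hypothetical, not from the paper), Cohen's Kappa can be computed directly from two label sequences:

```python
from collections import Counter

def cohen_kappa(human, auto):
    """Chance-corrected agreement between human codes and automatic codes."""
    assert len(human) == len(auto) and len(human) > 0
    n = len(human)
    # Observed agreement: fraction of units where the two coders agree
    observed = sum(h == a for h, a in zip(human, auto)) / n
    # Expected agreement if the two coders labeled independently at random,
    # given each coder's marginal label distribution
    h_counts, a_counts = Counter(human), Counter(auto)
    labels = set(human) | set(auto)
    expected = sum(h_counts[c] * a_counts[c] for c in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical deductive codes for six discourse units
human = ["on-task", "on-task", "off-task", "on-task", "off-task", "off-task"]
auto  = ["on-task", "off-task", "off-task", "on-task", "off-task", "on-task"]

accuracy = sum(h == a for h, a in zip(human, auto)) / len(human)
kappa = cohen_kappa(human, auto)
print(f"accuracy={accuracy:.3f}, kappa={kappa:.3f}")
```

Kappa is preferred over raw accuracy here because discourse codes are often imbalanced, and Kappa discounts the agreement a coder would reach by chance alone.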

Authors (5)
  1. Lishan Zhang (1 paper)
  2. Han Wu (124 papers)
  3. Xiaoshan Huang (16 papers)
  4. Tengfei Duan (1 paper)
  5. Hanxiang Du (3 papers)