Automatic deductive coding in discourse analysis: an application of large language models in learning analytics (2410.01240v1)
Abstract: Deductive coding is a common discourse analysis method widely used by learning science and learning analytics researchers for understanding teaching and learning interactions. It often requires researchers to manually label every discourse segment to be analyzed according to a theoretically guided coding scheme, which is time-consuming and labor-intensive. The emergence of large language models (LLMs) such as GPT has opened a new avenue for automatic deductive coding that overcomes the limitations of the traditional manual approach. To evaluate the usefulness of LLMs for automatic deductive coding, we employed three classification methods driven by different artificial intelligence technologies: a traditional text classification method with text feature engineering, a BERT-like pretrained language model, and a GPT-like pretrained language model. We applied these methods to two different datasets and explored the potential of GPT and prompt engineering for automatic deductive coding. By analyzing and comparing the accuracy and Kappa values of the three classification methods, we found that GPT with prompt engineering outperformed the other two methods on both datasets when only a limited number of training samples was available. By providing detailed prompt structures, the reported work demonstrates how LLMs can be used to implement automatic deductive coding.
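The abstract mentions detailed prompt structures for GPT-driven deductive coding but does not reproduce them; the sketch below illustrates, in a minimal and hypothetical way, how a few-shot deductive-coding prompt might be assembled from a coding scheme. The scheme labels, example utterances, and the `build_coding_prompt` helper are illustrative assumptions, not the paper's actual prompt.

```python
def build_coding_prompt(coding_scheme, examples, utterance):
    """Assemble a deductive-coding prompt: task instruction, the coding
    scheme definitions, a few labeled examples (few-shot), and finally
    the utterance to be coded."""
    lines = ["You are coding classroom discourse. Assign exactly one code per utterance."]
    lines.append("Coding scheme:")
    for code, definition in coding_scheme.items():
        lines.append(f"- {code}: {definition}")
    if examples:
        lines.append("Examples:")
        for text, code in examples:
            lines.append(f'Utterance: "{text}" -> Code: {code}')
    lines.append(f'Utterance: "{utterance}" -> Code:')
    return "\n".join(lines)

# Hypothetical coding scheme; the paper's actual codes are not given in the abstract.
scheme = {
    "QUESTION": "The speaker asks for information or clarification.",
    "EXPLANATION": "The speaker elaborates on a concept or gives reasoning.",
    "OFF_TASK": "The utterance is unrelated to the learning task.",
}
examples = [("Why does the ball slow down?", "QUESTION")]
prompt = build_coding_prompt(scheme, examples, "Friction acts against the motion.")
```

The resulting string would be sent to an LLM completion endpoint, and the returned code compared against human labels to compute accuracy and Kappa, as the study does.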
- Lishan Zhang (1 paper)
- Han Wu (124 papers)
- Xiaoshan Huang (16 papers)
- Tengfei Duan (1 paper)
- Hanxiang Du (3 papers)