
LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis (2310.15100v1)

Published 23 Oct 2023 in cs.CL

Abstract: Thematic analysis (TA) has been widely used for analyzing qualitative data in many disciplines and fields. To ensure reliable analysis, the same piece of data is typically assigned to at least two human coders. Moreover, to produce meaningful and useful analysis, human coders develop and deepen their data interpretation and coding over multiple iterations, making TA labor-intensive and time-consuming. Recently the emerging field of LLMs research has shown that LLMs have the potential to replicate human-like behavior in various tasks: in particular, LLMs outperform crowd workers on text-annotation tasks, suggesting an opportunity to leverage LLMs on TA. We propose a human-LLM collaboration framework (i.e., LLM-in-the-loop) to conduct TA with in-context learning (ICL). This framework provides the prompt to frame discussions with an LLM (e.g., GPT-3.5) to generate the final codebook for TA. We demonstrate the utility of this framework using survey datasets on the aspects of the music listening experience and the usage of a password manager. Results of the two case studies show that the proposed framework yields similar coding quality to that of human coders but reduces TA's labor and time demands.

Leveraging LLMs for Thematic Analysis: A Human-LLM Collaboration Framework

The paper "LLM-in-the-loop: Leveraging LLM for Thematic Analysis" explores the integration of LLMs in thematic analysis (TA) to potentially streamline the traditionally labor-intensive and iterative process. The effectiveness of this framework is evaluated by conducting TA through a collaborative approach involving both human coders (HC) and machine coders (MC), with an emphasis on employing techniques such as in-context learning (ICL) to facilitate the coding process.

Key Contributions and Methodology

The authors propose a human-LLM collaboration framework that aims to replicate and enhance the coding process typical in thematic analysis. Notable contributions of the paper include:

  1. Development of an LLM-in-the-loop framework, which involves iterations between human coders and an LLM (specifically GPT-3.5) to generate initial codes and refine them into a cohesive codebook that encodes qualitative data.
  2. Introduction of specific prompting techniques that help the LLM generate meaningful codes and themes, mitigating redundancy and enhancing the reliability of the thematic analysis.
  3. Evaluation on two survey datasets, one covering aspects of the music listening experience and the other exploring the usage of password managers, showcasing the framework's versatility and efficacy compared to traditional human coding.

Results

The human-LLM collaboration approach yielded coding quality comparable to that of human-only coders, as evidenced by Cohen's κ values indicating substantial agreement. The human coder-machine coder (HC+MC) pairing showed almost perfect agreement in both case studies, surpassing human-only coding in efficiency without significantly compromising accuracy. Particularly notable is the framework's ability to generate codebooks from partial data, working around LLM input-size limitations.

Implications and Future Directions

This paper makes a substantive case for leveraging LLMs in thematic analysis, providing a proof of concept for integrating AI systems into qualitative research domains. The proposed collaboration framework demonstrates significant time and labor savings, suggesting that researchers could reallocate resources to other complex aspects of qualitative research while maintaining high standards in data analysis. Moreover, this underscores AI's potential role in automating repetitive tasks across various disciplines.

Future research could focus on refining prompt strategies to further enhance the performance of LLMs in thematic analysis. Additionally, exploring the application of alternative LLMs to verify reproducibility and performance consistency across platforms is suggested. Addressing ethical concerns and thematic ambiguity through discrepancy discussion mechanisms remains an area for further exploration, which could enhance interpretative clarity and robustness of thematic analysis involving LLMs.

In conclusion, while there remain limitations concerning model optimization, application, and data sensitivity, the findings support the viability of using LLMs to facilitate thematic analysis, marking a progressive step towards integrating artificial intelligence into the qualitative domain.

Authors (3)
  1. Shih-Chieh Dai (5 papers)
  2. Aiping Xiong (8 papers)
  3. Lun-Wei Ku (35 papers)
Citations (32)