
Leveraging Large Language Models for Concept Graph Recovery and Question Answering in NLP Education (2402.14293v1)

Published 22 Feb 2024 in cs.CL

Abstract: In the domain of NLP, LLMs have demonstrated promise in text-generation tasks. However, their educational applications, particularly for domain-specific queries, remain underexplored. This study investigates LLMs' capabilities in educational scenarios, focusing on concept graph recovery and question-answering (QA). We assess LLMs' zero-shot performance in creating domain-specific concept graphs and introduce TutorQA, a new expert-verified NLP-focused benchmark for scientific graph reasoning and QA. TutorQA consists of five tasks with 500 QA pairs. To tackle TutorQA queries, we present CGLLM, a pipeline integrating concept graphs with LLMs for answering diverse questions. Our results indicate that LLMs' zero-shot concept graph recovery is competitive with supervised methods, showing an average 3% F1 score improvement. In TutorQA tasks, LLMs achieve up to 26% F1 score enhancement. Moreover, human evaluation and analysis show that CGLLM generates answers with more fine-grained concepts.
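The abstract describes CGLLM as a pipeline that integrates a concept graph with an LLM to answer TutorQA questions. As a rough illustration of that idea only, the sketch below builds a toy prerequisite graph, retrieves concepts near a query topic, and packs them into a prompt; the graph contents, traversal depth, and prompt format are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of concept-graph-augmented QA in the spirit of CGLLM.
# The graph, retrieval step, and prompt template are assumptions for
# illustration; they do not reproduce the paper's pipeline.
from collections import deque

# Toy concept graph: concept -> prerequisite concepts.
CONCEPT_GRAPH = {
    "transformers": ["attention", "word embeddings"],
    "attention": ["word embeddings"],
    "word embeddings": [],
}

def related_concepts(concept, depth=2):
    """Breadth-first traversal collecting prerequisites up to `depth` hops."""
    seen, frontier = set(), deque([(concept, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d >= depth:
            continue
        for nb in CONCEPT_GRAPH.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return sorted(seen)

def build_prompt(question, concept):
    """Augment the question with graph context before passing it to an LLM."""
    context = ", ".join(related_concepts(concept))
    return f"Related concepts: {context}\nQuestion: {question}"
```

In this toy setup, `build_prompt("What are transformers?", "transformers")` yields a prompt that lists "attention" and "word embeddings" as related concepts, mirroring how graph neighborhoods can supply fine-grained context to the answering model.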

Authors (9)
  1. Rui Yang (221 papers)
  2. Boming Yang (10 papers)
  3. Sixun Ouyang (5 papers)
  4. Tianwei She (6 papers)
  5. Aosong Feng (27 papers)
  6. Yuang Jiang (12 papers)
  7. Freddy Lecue (36 papers)
  8. Jinghui Lu (28 papers)
  9. Irene Li (47 papers)
Citations (4)