
Ask Good Questions for Large Language Models (2508.14025v1)

Published 19 Aug 2025 in cs.CL and cs.AI

Abstract: Recent advances in LLMs have significantly improved the performance of dialog systems, yet current approaches often fail to provide accurate topic guidance due to their inability to discern user confusion about related concepts. To address this, we introduce the Ask-Good-Question (AGQ) framework, which features an improved Concept-Enhanced Item Response Theory (CEIRT) model to better identify users' knowledge levels. Our contributions include applying the CEIRT model along with LLMs to directly generate guiding questions based on the inspiring text, greatly improving information retrieval efficiency during the question & answer process. In comparisons with baseline methods, our approach outperforms them, significantly enhancing users' information retrieval experiences.

Summary

  • The paper's main contribution is the AGQ framework, which uses an enhanced CEIRT model to dynamically assess users' knowledge and generate targeted questions.
  • It leverages adaptive inspiring text to fill conceptual gaps, offering precise guiding questions to improve information retrieval efficiency.
  • Experimental results demonstrate that AGQ outperforms baseline methods in dialog accuracy and user knowledge acquisition across various LLM configurations.

Ask Good Questions for LLMs

Introduction

The paper addresses the limitations of current LLM-driven dialog systems when it comes to effective question generation during information retrieval tasks. These models often struggle to identify users' knowledge deficiencies and to generate guiding questions accordingly. The Ask-Good-Question (AGQ) framework is introduced, leveraging an enhanced Concept-Enhanced Item Response Theory (CEIRT) model to more accurately gauge users' knowledge states and dynamically generate targeted questions. This approach aims to improve information retrieval efficiency and the overall user experience by crafting more contextually relevant questions.

Figure 1: The diagram shows a single cycle of the Ask-Good-Question (AGQ) framework: processing user-LLM interactions, dynamically updating knowledge state vectors ($\boldsymbol{\theta}$), and using discrimination ($\boldsymbol{a}$) and difficulty ($\boldsymbol{b}$) parameters to filter inspiring texts for generating guiding questions that enhance information retrieval.

Methodology

Concept-Enhanced Item Response Theory (CEIRT) Model

The AGQ framework employs the CEIRT model, which extends the two-parameter logistic (2PL) model typically used in psychometrics by introducing vector representations for the user's knowledge state ($\boldsymbol{\theta}$), item difficulty ($\boldsymbol{b}$), and item discrimination ($\boldsymbol{a}$). This allows for a dynamic and rigorous assessment of users' conceptual understanding. The core equation for the probability of a correct user response is:

$$p_i = \frac{1}{1 + \exp\left(-\sum_j \left(a_i \theta_j - b_i\right)\right)}$$

This captures how multiple conceptual abilities jointly contribute to answering a question correctly, and the estimate of the user's understanding is updated over the course of the interaction.
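As an illustration, the following sketch computes this response probability, under the assumption that the item's discrimination $a_i$ is a per-concept vector (the summary leaves its indexing implicit) and with $b_i$ placed inside the sum exactly as in the formula above:

```python
import numpy as np

def ceirt_probability(theta: np.ndarray, a: np.ndarray, b: float) -> float:
    """Probability of a correct response under the CEIRT model.

    theta : user's knowledge-state vector over concepts (theta_j)
    a     : the item's discrimination vector over concepts (assumed per-concept)
    b     : the item's difficulty (b_i), kept inside the sum as in the paper's formula
    """
    logit = np.sum(a * theta - b)
    return float(1.0 / (1.0 + np.exp(-logit)))

# Hypothetical example: a user strong in the first two concepts, weak elsewhere
theta = np.array([1.2, 0.8, -0.5, -0.9, 0.1])
a = np.array([0.9, 1.1, 0.7, 1.0, 0.8])
print(ceirt_probability(theta, a, b=0.3))
```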


Figure 2: Evolution of user knowledge states ($\boldsymbol{\theta}$) across five key concepts using different guiding question generation methods.

Inspiring Text and Adaptive Question Generation

To tailor questions effectively, the AGQ framework uses 'Inspiring Text' to contextualize guidance. It employs a scoring function $S(t,j)$ which measures the suitability of a text $t$ for a user with knowledge state $\theta_j$. This is calculated as:

$$S(t,j) = \exp\left(-\left(\left|\theta_j - b_i\right| - 1\right)^2\right)$$

Because the score peaks when the gap $|\theta_j - b_i|$ equals one, that is, at a moderate challenge, this function focuses guidance on the conceptual gaps where users will benefit most.
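A minimal Python sketch of this scoring function, ranking a hypothetical set of candidate texts by suitability for a user at $\theta_j = 0.2$:

```python
import math

def inspiring_text_score(theta_j: float, b_i: float) -> float:
    """Suitability S(t, j) of an inspiring text for knowledge state theta_j.

    Peaks when |theta_j - b_i| == 1: the text is moderately challenging,
    neither trivial nor out of reach.
    """
    return math.exp(-(abs(theta_j - b_i) - 1.0) ** 2)

# Hypothetical candidate texts with their difficulties b_i
candidates = {"text_a": -0.5, "text_b": 1.2, "text_c": 3.0}
ranked = sorted(candidates,
                key=lambda t: inspiring_text_score(0.2, candidates[t]),
                reverse=True)
print(ranked)  # text_b wins: its gap to theta_j is exactly 1.0
```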

Algorithm

Algorithmically, AGQ processes user inputs to determine interaction outcomes and updates knowledge states via the CEIRT model. The framework filters candidate texts to find the passages best suited to the user's query, then prompts an LLM with instructional prompt variants, selected by knowledge-state thresholds, to generate targeted guiding questions for the user (a code sketch follows the steps and figure below).

Algorithm Steps:

  1. User submits a query, which the model attempts to resolve, updating the user's knowledge estimation.
  2. The model identifies text suitable for bridging specific knowledge gaps.
  3. It generates targeted guiding questions, initially focusing on foundational knowledge ($P_{QG_{low}}$) and transitioning to more complex, application-oriented questions ($P_{QG_{high}}$) as users advance.

    Figure 3: This diagram illustrates the EOR-QA generation process using hydrocarbon miscible flooding as an example, including the extraction of concepts, the construction of a Concepts-Context Dictionary, and the subsequent generation and quality control of guiding questions.
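The sketch below strings these steps into one interaction round. The data layout, the `llm` callable, the prompt wording, and the weakest-concept heuristic are illustrative assumptions rather than the paper's implementation, and the CEIRT update of $\boldsymbol{\theta}$ is elided:

```python
import math

def inspiring_text_score(theta_j: float, b_i: float) -> float:
    """Suitability score S(t, j) from the previous sketch."""
    return math.exp(-(abs(theta_j - b_i) - 1.0) ** 2)

def agq_round(query, theta, texts, llm, threshold=0.0):
    """One AGQ interaction round (hypothetical names and data layout).

    query     : the user's question
    theta     : list of per-concept knowledge estimates (theta_j)
    texts     : inspiring texts as dicts {"passage": str, "b": [per-concept b_i]}
    llm       : any callable mapping a prompt string to generated text
    threshold : knowledge level separating foundational (P_QG_low) from
                application-oriented (P_QG_high) question prompts
    """
    # 1. Resolve the query; judging correctness and applying the CEIRT
    #    update to theta would happen here (elided in this sketch).
    answer = llm(f"Answer the user's question: {query}")

    # 2. Find the weakest concept and the inspiring text best matched to it.
    j = min(range(len(theta)), key=lambda k: theta[k])
    best = max(texts, key=lambda t: inspiring_text_score(theta[j], t["b"][j]))

    # 3. Pick the prompt variant by knowledge level and generate the question.
    variant = "P_QG_low" if theta[j] < threshold else "P_QG_high"
    question = llm(
        f"[{variant}] Based on the following text, ask one guiding question "
        f"that leads the user toward the missing concept:\n{best['passage']}"
    )
    return answer, question
```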

Experimental Results

Performance Comparison

In evaluations against baselines such as Zero-shot generation and Chain-of-Thought prompting, AGQ notably outperformed existing methods, as measured by user retrieval accuracy and progressive knowledge acquisition across interaction rounds.

Figure 4: Accuracy comparison of different guiding question generation methods over dialogue rounds. The AGQ method demonstrates performance significantly exceeding CoT and Zero-shot approaches, closely approaching the effectiveness of Human Experts.

Cross-Model Adaptability

The AGQ framework was tested across various LLMs, showing robustness and adaptability under different configurations. Its efficacy did not diminish significantly across model scales, suggesting strong generalizability.

Figure 5: Accuracy comparison of guiding question generation using the AGQ framework with different LLMs (ChatGLM4-9B, Qwen2.5-7B, Qwen2.5-32B) and Human Expert over dialogue rounds.

Conclusion

AGQ offers an effective approach to question generation in information retrieval settings by integrating the CEIRT model into LLM workflows for dynamic, user-specific question crafting. The framework holds promise across domains requiring tailored instructional guidance, such as education and specialized industry training. While initial demonstrations focused on the knowledge-intensive domain of enhanced oil recovery, future research may extend the approach to other complex, knowledge-driven environments.
