Interactive Evaluation of Large Language Models for Multi-Requirement Software Engineering Tasks

Published 26 Aug 2025 in cs.AI (arXiv:2508.18905v1)

Abstract: Standard single-turn, static benchmarks fall short in evaluating the nuanced capabilities of LLMs on complex tasks such as software engineering. In this work, we propose a novel interactive evaluation framework that assesses LLMs on multi-requirement programming tasks through structured, feedback-driven dialogue. Each task is modeled as a requirement dependency graph, and an "interviewer" LLM, aware of the ground-truth solution, provides minimal, targeted hints to an "interviewee" model to help correct errors and fulfill target constraints. This dynamic protocol enables fine-grained diagnostic insights into model behavior, uncovering strengths and systematic weaknesses that static benchmarks fail to measure. We build on DevAI, a benchmark of 55 curated programming tasks, by adding ground-truth solutions and evaluating the relevance and utility of interviewer hints through expert annotation. Our results highlight the importance of dynamic evaluation in advancing the development of collaborative code-generating agents.
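To make the protocol concrete, the sketch below illustrates one plausible shape of the interviewer/interviewee loop over a requirement dependency graph. The data layout, hint policy, and turn budget are assumptions for illustration only; the paper's actual prompts, DevAI task format, and checking procedure are not specified in the abstract, so LLM calls are stubbed as plain callables.

```python
# Minimal sketch of the interactive evaluation loop, under assumptions:
# requirements form a dependency graph, the interviewer targets one unmet
# requirement per turn, and all model calls are placeholder callables.

from typing import Callable, Dict, List, Set

# A task is modeled as a requirement dependency graph:
# each requirement maps to the requirements it depends on.
RequirementGraph = Dict[str, Set[str]]


def unmet_requirements(graph: RequirementGraph,
                       satisfied: Set[str]) -> List[str]:
    """Requirements whose prerequisites are met but which remain unsatisfied."""
    return [req for req, deps in graph.items()
            if req not in satisfied and deps <= satisfied]


def interactive_evaluation(graph: RequirementGraph,
                           interviewee: Callable[[str, str], str],
                           interviewer_hint: Callable[[str, str], str],
                           check: Callable[[str, str], bool],
                           task_prompt: str,
                           max_turns: int = 10) -> Set[str]:
    """Feedback-driven dialogue: the interviewer issues minimal, targeted hints
    until all requirements are satisfied or the turn budget runs out."""
    satisfied: Set[str] = set()
    solution = interviewee(task_prompt, "")            # initial attempt, no hint
    for _ in range(max_turns):
        # Re-check every requirement against the current solution.
        satisfied = {req for req in graph if check(req, solution)}
        targets = unmet_requirements(graph, satisfied)
        if not targets:
            break                                      # all requirements fulfilled
        hint = interviewer_hint(targets[0], solution)  # hint for one failing requirement
        solution = interviewee(task_prompt, hint)      # interviewee revises its solution
    return satisfied
```

In this reading, the dependency graph determines which requirements are eligible for hints at each turn (only those whose prerequisites already hold), which matches the abstract's emphasis on structured, targeted feedback rather than free-form critique.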
