Interactive Evaluation of Large Language Models for Multi-Requirement Software Engineering Tasks

Source: arXiv
Abstract

Standard single-turn, static benchmarks fall short in evaluating the nuanced capabilities of Large Language Models (LLMs) on complex tasks such as software engineering. In this work, we propose a novel interactive evaluation framework that assesses LLMs on multi-requirement programming tasks through structured, feedback-driven dialogue. Each task is modeled as a requirement dependency graph, and an "interviewer" LLM, aware of the ground-truth solution, provides minimal, targeted hints to an "interviewee" model to help correct errors and fulfill target constraints. This dynamic protocol enables fine-grained diagnostic insights into model behavior, uncovering strengths and systematic weaknesses that static benchmarks fail to measure. We build on DevAI, a benchmark of 55 curated programming tasks, by adding ground-truth solutions and evaluating the relevance and utility of interviewer hints through expert annotation. Our results highlight the importance of dynamic evaluation in advancing the development of collaborative code-generating agents.
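To make the protocol described above concrete, the following is a minimal Python sketch of the interviewer-interviewee loop over a requirement dependency graph. It is an illustrative assumption based only on the abstract, not the authors' released code; the names Requirement, interactive_evaluation, interviewer, and interviewee are hypothetical.

# Sketch of the feedback-driven dialogue described in the abstract.
# All names and signatures here are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Requirement:
    rid: str
    description: str
    depends_on: list[str] = field(default_factory=list)  # edges of the requirement dependency graph
    check: Callable[[str], bool] = lambda code: False     # task-specific verifier for this requirement


def interactive_evaluation(prompt, ground_truth, requirements, interviewer, interviewee, max_turns=5):
    """Feedback-driven dialogue for one multi-requirement programming task.

    `interviewer` and `interviewee` are callables wrapping the two LLMs.
    Returns the final solution and the number of hints used, which serves
    as a fine-grained diagnostic signal.
    """
    solution = interviewee(prompt)  # initial single-turn attempt
    hints_used = 0
    for _ in range(max_turns):
        unmet = [r for r in requirements if not r.check(solution)]
        if not unmet:
            break  # all target constraints are fulfilled
        # The interviewer sees the ground-truth solution and the failed
        # requirements, and replies with a minimal, targeted hint rather
        # than revealing the full fix.
        hint = interviewer(ground_truth, solution, unmet)
        solution = interviewee(prompt, previous=solution, hint=hint)
        hints_used += 1
    return solution, hints_used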

Dimitrios Rontogiannis, Maxime Peyrard, Nicolas Baldwin, Martin Josifoski, Robert West, Dimitrios Gunopulos

Computing technology; computer technology

Dimitrios Rontogiannis, Maxime Peyrard, Nicolas Baldwin, Martin Josifoski, Robert West, Dimitrios Gunopulos. Interactive Evaluation of Large Language Models for Multi-Requirement Software Engineering Tasks [EB/OL]. (2025-08-26) [2025-09-09]. https://arxiv.org/abs/2508.18905.
