Model Performance-Guided Evaluation Data Selection for Effective Prompt Optimization
Optimizing Large Language Model (LLM) performance requires well-crafted prompts, but manual prompt engineering is labor-intensive and often ineffective. Automated prompt optimization techniques address this challenge, but most rely on randomly selected evaluation subsets that fail to represent the full dataset, leading to unreliable evaluations and suboptimal prompts. Existing coreset selection methods, designed for LLM benchmarking, are unsuitable for prompt optimization due to the difficulty of clustering similar samples, high data collection costs, and the unavailability of performance data for new or private datasets. To overcome these issues, we propose IPOMP, an Iterative evaluation data selection approach for effective Prompt Optimization using real-time Model Performance. IPOMP is a two-stage approach: it first selects representative and diverse samples using semantic clustering and boundary analysis, then iteratively refines the selection with real-time model performance data to replace redundant samples. Evaluations on the BIG-bench dataset show that IPOMP improves effectiveness by 1.6% to 5.3% and stability by at least 57% compared with SOTA baselines, with computational overhead below 1%. Furthermore, the results demonstrate that our real-time performance-guided refinement approach can be universally applied to enhance existing coreset selection methods.
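The abstract only sketches IPOMP at a high level; the Python sketch below illustrates one plausible reading of its two stages. The function names (`stage1_select`, `stage2_refine`), the k-means clustering choice, the boundary-margin heuristic, and the correlation-based redundancy test are all assumptions inferred from the abstract, not the authors' implementation.

```python
# Hypothetical sketch of IPOMP's two-stage selection, inferred only from the
# abstract. Names and heuristics are illustrative, not the authors' API.
import numpy as np
from sklearn.cluster import KMeans

def stage1_select(embeddings, k, budget):
    """Stage 1: semantic clustering plus boundary analysis.
    Picks one representative per cluster, then fills the budget with
    boundary samples that sit between two clusters."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    dists = km.transform(embeddings)  # distance of each sample to every centroid
    selected = []
    # Representatives: the sample closest to each cluster centroid.
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        selected.append(int(members[np.argmin(dists[members, c])]))
    # Boundary samples: smallest margin between the nearest and
    # second-nearest centroid (most ambiguous cluster membership).
    sorted_d = np.sort(dists, axis=1)
    margin = sorted_d[:, 1] - sorted_d[:, 0]
    for i in np.argsort(margin):
        if len(selected) >= budget:
            break
        if int(i) not in selected:
            selected.append(int(i))
    return selected

def stage2_refine(selected, pool, perf_matrix, corr_threshold=0.95):
    """Stage 2: iterative refinement with real-time performance data.
    perf_matrix[i, j] = score of candidate prompt j on sample i, collected
    as the optimizer runs. A sample whose per-prompt performance profile is
    nearly identical to another selected sample's is treated as redundant
    and swapped for an unused pool sample. (A full implementation would
    recompute correlations after each swap; this sketch does one pass.)"""
    selected = list(selected)
    unused = [i for i in pool if i not in selected]
    corr = np.corrcoef(perf_matrix[selected])
    for a in range(len(selected)):
        for b in range(a + 1, len(selected)):
            if corr[a, b] > corr_threshold and unused:
                selected[b] = unused.pop(0)  # replace the redundant sample
    return selected
```

Under this reading, Stage 1 needs only embeddings (cheap, no model calls), while Stage 2 reuses performance scores already produced during prompt evaluation, which would explain the reported sub-1% overhead.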
Ximing Dong, Shaowei Wang, Dayi Lin, Ahmed E. Hassan
Computing Technology, Computer Science and Technology
Ximing Dong, Shaowei Wang, Dayi Lin, Ahmed E. Hassan. Model Performance-Guided Evaluation Data Selection for Effective Prompt Optimization [EB/OL]. (2025-05-15) [2025-06-06]. https://arxiv.org/abs/2505.10736.