Sem-DPO: Mitigating Semantic Inconsistency in Preference Optimization for Prompt Engineering
Generative AI can now synthesize strikingly realistic images from text, yet output quality remains highly sensitive to how prompts are phrased. Direct Preference Optimization (DPO) offers a lightweight, off-policy alternative to reinforcement learning for automatic prompt engineering, but its token-level regularization leaves semantic inconsistency unchecked: prompts that win higher preference scores can still drift away from the user's intended meaning. We introduce Sem-DPO, a variant of DPO that preserves semantic consistency while retaining DPO's simplicity and efficiency. Sem-DPO scales the DPO loss with a weight based on how far the winning prompt has drifted from the original in embedding space, reducing the influence of training examples that are semantically misaligned. We provide the first analytical bound on semantic drift for preference-tuned prompt generators, showing that Sem-DPO keeps learned prompts within a provably bounded neighborhood of the original text. On three standard text-to-image prompt-optimization benchmarks and two language models, Sem-DPO achieves 8-12% higher CLIP similarity and 5-9% higher human-preference scores (HPSv2.1, PickScore) than DPO, while also outperforming state-of-the-art baselines. These findings suggest that strong flat baselines augmented with semantic weighting should become the new standard for prompt-optimization studies and lay the groundwork for broader, semantics-aware preference optimization in language models.
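The abstract describes Sem-DPO as the DPO objective down-weighted by the semantic distance between the original prompt and the preferred (winning) prompt. Below is a minimal sketch of one way such a weighting could look, assuming an exponential weight exp(-gamma * d) on the cosine distance d between prompt embeddings; the weighting form, the gamma hyperparameter, and the embedding source are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: semantically weighted DPO loss (assumed form, not the paper's exact loss).
import torch
import torch.nn.functional as F


def sem_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps,
                 prompt_emb, chosen_emb,
                 beta=0.1, gamma=1.0):
    """Per-example DPO loss down-weighted by the semantic drift of the winning prompt."""
    # Standard DPO implicit-reward log-ratio term.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = pi_logratios - ref_logratios
    dpo_loss = -F.logsigmoid(beta * logits)           # shape: (batch,)

    # Semantic distance between the original prompt and the chosen (winning) prompt.
    cos_sim = F.cosine_similarity(prompt_emb, chosen_emb, dim=-1)
    sem_dist = 1.0 - cos_sim                           # 0 when meanings coincide

    # Exponentially down-weight examples whose winner drifts from the original prompt.
    weights = torch.exp(-gamma * sem_dist).detach()
    return (weights * dpo_loss).mean()
```

In practice the prompt embeddings could come from a frozen text encoder (for example, the CLIP text tower used in the paper's evaluation), so the weight is a fixed per-example quantity and the gradient flows only through the DPO term.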
Anas Mohamed, Azal Ahmad Khan, Xinran Wang, Ahmad Faraz Khan, Shuwen Ge, Saman Bahzad Khan, Ayaan Ahmad, Ali Anwar
Computing Technology, Computer Technology
Anas Mohamed, Azal Ahmad Khan, Xinran Wang, Ahmad Faraz Khan, Shuwen Ge, Saman Bahzad Khan, Ayaan Ahmad, Ali Anwar. Sem-DPO: Mitigating Semantic Inconsistency in Preference Optimization for Prompt Engineering [EB/OL]. (2025-07-29) [2025-08-10]. https://arxiv.org/abs/2507.20133.