
Principled Foundations for Preference Optimization

Source: arXiv
Abstract

In this paper, we show that direct preference optimization (DPO) is a very specific form of a connection between two major theories in the ML context of learning from preferences: loss functions (Savage) and stochastic choice (Doignon-Falmagne and Machina). The connection is established for all of Savage's losses, and at this level of generality, (i) it includes support for abstention on the choice-theory side, (ii) it includes support for non-convex objectives on the ML side, and (iii) it allows one to frame, for free, some notable extensions of the DPO setting, including margins and corrections for length. Understanding how DPO operates from a general, principled perspective is crucial because of the huge and diverse application landscape of models and the current momentum around DPO, but also -- and importantly -- because many state-of-the-art variations on DPO occupy only a small region of the map that we cover. It also helps in understanding the pitfalls of departing from this map, and in figuring out workarounds.
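For orientation, the DPO objective the abstract generalizes is the logistic (a Savage) loss applied to the gap in implicit rewards between a chosen and a rejected response. A minimal sketch for a single preference pair, with the optional margin extension mentioned in the abstract (the function name and argument layout here are illustrative, not from the paper):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1, margin=0.0):
    """DPO loss for one preference pair.

    logp_w / logp_l: summed token log-probabilities of the chosen (w) and
    rejected (l) responses under the policy being trained.
    ref_logp_w / ref_logp_l: the same quantities under the frozen reference
    model. beta scales the implicit reward; margin=0 recovers plain DPO.
    """
    # Implicit reward gap: beta * (chosen log-ratio minus rejected log-ratio)
    delta = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Negative log-sigmoid of the (margin-shifted) gap, i.e. the logistic loss
    return -math.log(1.0 / (1.0 + math.exp(-(delta - margin))))
```

When the policy matches the reference model exactly, the gap is zero and the loss is log 2; widening the gap in favor of the chosen response drives the loss toward zero.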

Wenxuan Zhou, Shujian Zhang, Brice Magdalou, John Lambert, Ehsan Amid, Richard Nock, Andrew Hard

Computing Technology; Computer Technology

Wenxuan Zhou, Shujian Zhang, Brice Magdalou, John Lambert, Ehsan Amid, Richard Nock, Andrew Hard. Principled Foundations for Preference Optimization [EB/OL]. (2025-07-10) [2025-07-21]. https://arxiv.org/abs/2507.07855.
