Fundamental Limits of Game-Theoretic LLM Alignment: Smith Consistency and Preference Matching
Nash Learning from Human Feedback is a game-theoretic framework for aligning large language models (LLMs) with human preferences by modeling learning as a two-player zero-sum game. However, using the raw preference probability as the game's payoff severely limits the potential of the game-theoretic LLM alignment framework. In this paper, we systematically study which choices of payoff, derived from pairwise human preferences, yield desirable alignment properties. We establish necessary and sufficient conditions for Condorcet consistency, diversity through mixed strategies, and Smith consistency. These results provide a theoretical foundation for the robustness of game-theoretic LLM alignment. Further, we show the impossibility of preference matching: no smooth and learnable mapping of pairwise preferences can guarantee a unique Nash equilibrium that matches a target policy, even under standard assumptions such as the Bradley-Terry-Luce model. This result highlights a fundamental limitation of game-theoretic LLM alignment.
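For context, a minimal sketch of the underlying game, following the standard Nash Learning from Human Feedback formulation in the literature (the notation below is illustrative and may differ from the paper's):

\[
\pi^{\star} \;=\; \arg\max_{\pi}\,\min_{\pi'} \; \mathbb{E}_{x \sim \rho,\; y \sim \pi(\cdot \mid x),\; y' \sim \pi'(\cdot \mid x)} \big[ P(y \succ y' \mid x) \big],
\]

where \(P(y \succ y' \mid x)\) is the pairwise human preference probability used as the zero-sum payoff. Under the Bradley-Terry-Luce model, \(P(y \succ y' \mid x) = \sigma\big(r(x, y) - r(x, y')\big)\) with \(\sigma\) the logistic function and \(r\) a latent reward. The paper asks which transformations of \(P\), used in place of the raw payoff, preserve properties such as Condorcet and Smith consistency.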
Zhekun Shi, Kaizhao Liu, Qi Long, Weijie J. Su, Jiancong Xiao
Computing technology; computer science
Zhekun Shi, Kaizhao Liu, Qi Long, Weijie J. Su, Jiancong Xiao. Fundamental Limits of Game-Theoretic LLM Alignment: Smith Consistency and Preference Matching [EB/OL]. (2025-05-26) [2025-06-07]. https://arxiv.org/abs/2505.20627.