Dynamic Sampling that Adapts: Iterative DPO for Self-Aware Mathematical Reasoning
Existing data-selection approaches for reasoning tasks rely predominantly on externally predefined static metrics such as difficulty and diversity, which are typically designed for supervised fine-tuning (SFT) and lack adaptability to continuous training. A critical limitation of these methods is their inability to dynamically align with a model's evolving capabilities during online training, a gap that grows more pronounced with the rise of dynamic training paradigms and online reinforcement learning (RL) frameworks (e.g., R1-style models). To address this, we introduce SAI-DPO, an algorithm that dynamically selects training data by continuously assessing the model's stage-specific reasoning abilities across training phases. By integrating real-time performance feedback, SAI-DPO tailors data selection to the model's evolving strengths and weaknesses, improving both data-utilization efficiency and final task performance. Extensive experiments on three state-of-the-art models and eight mathematical reasoning benchmarks, including challenging competition-level datasets (e.g., AIME24 and AMC23), show that SAI-DPO achieves an average performance gain of up to 21.3 percentage points, with particularly notable improvements of 10 and 15 points on AIME24 and AMC23, respectively. These results highlight the superiority of dynamic, model-adaptive data selection over static, externally defined strategies for advancing reasoning.
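The abstract describes the selection loop only at a high level. As a minimal, hedged sketch of one plausible reading, the Python below probes per-topic accuracy at each training phase, up-weights sampling toward topics where the model is weak, and hands the resulting batch to an ordinary DPO update. The helpers `probe_accuracy`, `select_batch`, and `dpo_update`, along with the exponential weighting, are illustrative assumptions, not the paper's published algorithm.

```python
import math
import random

def probe_accuracy(model, probes):
    """Estimate the model's current accuracy on each topic from small probe sets."""
    acc = {}
    for topic, items in probes.items():
        correct = sum(model(question) == answer for question, answer in items)
        acc[topic] = correct / len(items)
    return acc

def select_batch(pool, acc, batch_size, temperature=1.0):
    """Sample preference pairs, up-weighting topics the model is currently weak on."""
    topics = list(pool)
    weights = [math.exp((1.0 - acc.get(t, 0.5)) / temperature) for t in topics]
    batch = []
    for _ in range(batch_size):
        topic = random.choices(topics, weights)[0]
        batch.append(random.choice(pool[topic]))  # a (prompt, chosen, rejected) triple
    return batch

def train_sai_dpo(model, pool, probes, dpo_update, rounds=5, batch_size=256):
    """Alternate between probing stage-specific ability and a standard DPO step."""
    for _ in range(rounds):
        acc = probe_accuracy(model, probes)          # real-time performance feedback
        batch = select_batch(pool, acc, batch_size)  # adaptive data selection
        model = dpo_update(model, batch)             # e.g., one epoch of the DPO objective
    return model
```

Lower `temperature` concentrates sampling more sharply on weak topics; `temperature → ∞` recovers uniform (static) sampling, which is the baseline the paper argues against.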
Jun Rao, Xuebo Liu, Hexuan Deng, Zepeng Lin, Zixiong Yu, Jiansheng Wei, Xiaojun Meng, Min Zhang
Computing technology; computer technology
Jun Rao, Xuebo Liu, Hexuan Deng, Zepeng Lin, Zixiong Yu, Jiansheng Wei, Xiaojun Meng, Min Zhang. Dynamic Sampling that Adapts: Iterative DPO for Self-Aware Mathematical Reasoning [EB/OL]. (2025-05-21) [2025-06-06]. https://arxiv.org/abs/2505.16176.