R-Diverse: Mitigating Diversity Illusion in Self-Play LLM Training
Gengsheng Li, Jinghan He, Shijie Wang, Dan Zhang, Ruiqi Liu, Renrui Zhang, Zijun Yao, Junfeng Fang, Haiyun Guo, Jinqiao Wang
Abstract
Self-play bootstraps LLM reasoning through an iterative Challenger-Solver loop: the Challenger is trained to generate questions that target the Solver's capabilities, and the Solver is optimized on the generated data to expand its reasoning skills. However, existing frameworks such as R-Zero often exhibit non-sustained improvement, where early gains degrade as self-play continues. We identify a key failure mode, Diversity Illusion, in which the Solver's training signals appear diverse yet collapse into recurring underlying patterns. It manifests as (1) Local Diversity Illusion, where diversity is enforced only within a batch, inducing cross-iteration mode cycling; and (2) Surface Diversity Illusion, where questions vary superficially but require near-identical reasoning skills. To mitigate these failure modes, we propose R-Diverse with two aligned innovations: Memory-Augmented Penalty (MAP), which uses a persistent memory bank to discourage question recycling across iterations, and Skill-Aware Measurement (SAM), which evaluates diversity by the reasoning skills exercised rather than by surface variation of questions. Across 10 math and general reasoning benchmarks, R-Diverse sustains gains over more iterations and consistently outperforms prior self-play methods. Code is available at https://github.com/Gengsheng-Li/R-Diverse.
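To make the MAP idea concrete, below is a minimal, illustrative Python sketch of a memory-augmented penalty: a persistent bank of question embeddings is kept across self-play iterations, and each new Challenger question is penalized in proportion to its similarity to anything already stored. The class name, the use of cosine similarity over embeddings, and the `penalty_weight` parameter are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np


class MemoryAugmentedPenalty:
    """Illustrative sketch of a MAP-style penalty (assumed design, not the paper's exact method).

    A persistent memory bank of question embeddings is kept across self-play
    iterations; new questions are penalized by their similarity to stored ones,
    discouraging cross-iteration recycling.
    """

    def __init__(self, penalty_weight: float = 0.5):
        self.bank: list[np.ndarray] = []  # unit-norm embeddings from all past iterations
        self.penalty_weight = penalty_weight

    @staticmethod
    def _unit(v: np.ndarray) -> np.ndarray:
        return v / (np.linalg.norm(v) + 1e-8)

    def penalty(self, emb: np.ndarray) -> float:
        """Penalty in [0, penalty_weight], driven by the most similar stored question."""
        if not self.bank:
            return 0.0
        sims = np.stack(self.bank) @ self._unit(emb)  # cosine similarity to every stored item
        return self.penalty_weight * float(np.clip(sims.max(), 0.0, 1.0))

    def add(self, emb: np.ndarray) -> None:
        """Store a question embedding so later iterations cannot recycle it for free."""
        self.bank.append(self._unit(emb))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    map_penalty = MemoryAugmentedPenalty(penalty_weight=0.5)
    q1 = rng.normal(size=128)  # stand-in for a question embedding
    map_penalty.add(q1)
    print(map_penalty.penalty(q1))                      # ~0.5: near-duplicate of a past question
    print(map_penalty.penalty(rng.normal(size=128)))    # small: genuinely new question
```

In such a setup, the penalty would be subtracted from the Challenger's reward for each generated question; SAM would further require that similarity be computed over the reasoning skills a question exercises rather than over its surface form.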