Omni-Thinker: Scaling Cross-Domain Generalization in LLMs via Multi-Task RL with Hybrid Rewards
The advancement of general-purpose artificial intelligence relies on large language models (LLMs) that excel across a wide range of tasks, from structured reasoning to creative generation. However, post-training methods like Supervised Fine-Tuning (SFT) often struggle with generalization, favoring memorization over transferable learning. In this work, we introduce Omni-Thinker, a unified reinforcement learning (RL) framework that enhances LLM performance across diverse tasks by combining rule-based verifiable rewards with generative preference signals via LLM-as-a-Judge evaluations. Our approach enables consistent optimization across task types and scales RL-based training to subjective domains. We further investigate training strategies, demonstrating that a curriculum-based progression that orders tasks from structured to open-ended improves performance and reduces forgetting. Experimental results across four domains reveal that curriculum learning improves performance by 5.2% over joint training and 9.1% over model merging. These results highlight the importance of task-aware sampling and hybrid supervision in scaling RL-based post-training for general-purpose LLMs.
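The abstract describes a hybrid reward scheme: rule-based verifiable rewards for tasks with checkable answers and LLM-as-a-Judge preference scores for open-ended ones. Below is a minimal, hypothetical Python sketch of how such a reward could be dispatched per task; the function names and the judge interface are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, Optional

def rule_based_reward(response: str, reference: str) -> float:
    """Verifiable reward: 1.0 if the response's final token matches the reference answer, else 0.0.
    (Illustrative check; the paper's actual verifier is task-specific.)"""
    tokens = response.strip().split()
    return 1.0 if tokens and tokens[-1] == reference.strip() else 0.0

def hybrid_reward(prompt: str,
                  response: str,
                  reference: Optional[str],
                  judge: Callable[[str, str], float]) -> float:
    """Dispatch between reward types:
    - if a reference answer exists (structured task), use the rule-based verifier;
    - otherwise (open-ended task), fall back to an LLM judge scoring in [0, 1]."""
    if reference is not None:
        return rule_based_reward(response, reference)
    return judge(prompt, response)

# Example usage with a stub judge standing in for an LLM-as-a-Judge call.
if __name__ == "__main__":
    stub_judge = lambda prompt, response: 0.8  # placeholder preference score
    print(hybrid_reward("2+2=?", "The answer is 4", reference="4", judge=stub_judge))  # 1.0
    print(hybrid_reward("Write a haiku.", "Autumn wind...", reference=None, judge=stub_judge))  # 0.8
```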
Yu Luo, Feng Wen, Jianye Hao, Mark Coates, Yingxue Zhang, Derek Li, Jiaming Zhou, Amirreza Kazemi, Qianyi Sun, Abbas Ghaddar, Mohammad Ali Alomrani, Liheng Ma, Dong Li
Computing Technology, Computer Technology
Yu Luo, Feng Wen, Jianye Hao, Mark Coates, Yingxue Zhang, Derek Li, Jiaming Zhou, Amirreza Kazemi, Qianyi Sun, Abbas Ghaddar, Mohammad Ali Alomrani, Liheng Ma, Dong Li. Omni-Thinker: Scaling Cross-Domain Generalization in LLMs via Multi-Task RL with Hybrid Rewards [EB/OL]. (2025-07-24) [2025-08-10]. https://arxiv.org/abs/2507.14783.