
J1: Exploring Simple Test-Time Scaling for LLM-as-a-Judge

Source: arXiv
Abstract

The current focus of AI research is shifting from emphasizing model training towards enhancing evaluation quality, a transition that is crucial for driving further advancements in AI systems. Traditional evaluation methods typically rely on reward models that assign scalar preference scores to outputs. Although effective, such approaches lack interpretability, often leaving users uncertain about why a reward model rates a particular response as high or low. The advent of LLM-as-a-Judge provides a more scalable and interpretable method of supervision, offering insights into the decision-making process. Moreover, with the emergence of large reasoning models, which consume more tokens for deeper thinking and answer refinement, scaling test-time computation in the LLM-as-a-Judge paradigm presents an avenue for further boosting performance and providing more interpretability through reasoning traces. In this paper, we introduce $\textbf{J1-7B}$, which is first supervised fine-tuned on reflection-enhanced datasets collected via rejection sampling and subsequently trained using Reinforcement Learning (RL) with verifiable rewards. At inference time, we apply Simple Test-Time Scaling (STTS) strategies for additional performance improvement. Experimental results demonstrate that $\textbf{J1-7B}$ surpasses the previous state-of-the-art LLM-as-a-Judge by $\textbf{4.8}\%$ and exhibits a $\textbf{5.1}\%$ stronger scaling trend under STTS. Additionally, we present three key findings: (1) existing LLM-as-a-Judge models do not inherently exhibit such a scaling trend; (2) a model simply fine-tuned on reflection-enhanced datasets continues to demonstrate similarly weak scaling behavior; (3) a significant scaling trend emerges primarily during the RL phase, suggesting that effective STTS capability is acquired predominantly through RL training.
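
The abstract does not spell out what the STTS strategies look like at inference time. As a rough illustration only, the sketch below assumes STTS here means a budget-forcing style of extension for the judge's reasoning trace: suppress the verdict, append a continuation cue such as "Wait," and let the model keep thinking before it commits. The `generate` callable, the prompt wording, the cue token, and the `extra_rounds` knob are all assumptions made for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of Simple Test-Time Scaling (STTS) for an LLM judge,
# in a budget-forcing style: withhold the verdict and force extra
# rounds of reflection. Hypothetical API: `generate(prompt, stop)`
# wraps any completion endpoint and returns text up to `stop`.

def judge_with_stts(generate, question, answer_a, answer_b, extra_rounds=2):
    """Compare two answers; `extra_rounds` scales test-time compute."""
    prompt = (
        "You are an impartial judge. Think step by step, then output "
        "'Verdict: A' or 'Verdict: B'.\n"
        f"Question: {question}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Reasoning:"
    )
    trace = generate(prompt, stop="Verdict:")
    for _ in range(extra_rounds):
        # Budget forcing: cut off the verdict and append a continuation
        # cue so the judge re-examines its own reasoning trace.
        prompt = prompt + trace + "\nWait,"
        trace = generate(prompt, stop="Verdict:")
    # Now allow the verdict to be emitted.
    return generate(prompt + trace + "\nVerdict:", stop=None)
```

Under this reading, a "stronger scaling trend" means judging accuracy keeps improving as `extra_rounds` grows; the abstract's third finding is that this behavior emerges mainly after the RL phase rather than from reflection-enhanced fine-tuning alone.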

Chi-Min Chan, Chunpu Xu, Jiaming Ji, Zhen Ye, Pengcheng Wen, Chunyang Jiang, Yaodong Yang, Wei Xue, Sirui Han, Yike Guo

Subjects: Computing Technology; Computer Technology

Chi-Min Chan, Chunpu Xu, Jiaming Ji, Zhen Ye, Pengcheng Wen, Chunyang Jiang, Yaodong Yang, Wei Xue, Sirui Han, Yike Guo. J1: Exploring Simple Test-Time Scaling for LLM-as-a-Judge[EB/OL]. (2025-05-17)[2025-07-09]. https://arxiv.org/abs/2505.11875.
