
Test-Time Scaling with Repeated Sampling Improves Multilingual Text Generation

Source: arXiv
Abstract

Inference-time scaling via repeated sampling has shown promise in reasoning tasks, but its effectiveness in multilingual generation remains underexplored. We evaluate this approach using perplexity- and reward-based verifiers on two multilingual benchmarks: the Aya Evaluation Suite and m-ArenaHard. Our results show consistent quality improvements, with gains exceeding 35% in some cases. While perplexity-based scoring is effective for open-ended prompts, only reward-based verifiers improve performance on tasks requiring reasoning (e.g., math, code). Our results demonstrate the broader utility of repeated sampling for multilingual text generation and underscore the importance of selecting the right verifiers for the task.
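The core idea described above is best-of-N selection: draw several candidate generations for a prompt and keep the one a verifier scores highest. The following is a minimal sketch of that loop, not the authors' implementation; the names `generate_fn`, `score_fn`, and `negative_perplexity` are hypothetical placeholders standing in for a multilingual generator and the perplexity- or reward-based verifiers compared in the paper.

```python
# Minimal best-of-N (repeated sampling) sketch.
# generate_fn and score_fn are hypothetical placeholders, not the paper's code.
import math
from typing import Callable, List, Tuple


def best_of_n(prompt: str,
              generate_fn: Callable[[str], str],
              score_fn: Callable[[str, str], float],
              n: int = 8) -> Tuple[str, float]:
    """Sample n candidate generations and return the highest-scoring one.

    score_fn can be a perplexity-based verifier (higher = more fluent under a
    scoring model) or a reward-model score, mirroring the two verifier
    families compared in the paper.
    """
    candidates: List[str] = [generate_fn(prompt) for _ in range(n)]
    scored = [(cand, score_fn(prompt, cand)) for cand in candidates]
    return max(scored, key=lambda pair: pair[1])


def negative_perplexity(token_logprobs: List[float]) -> float:
    """Perplexity-style verifier score from per-token log-probabilities.

    Perplexity = exp(-mean log-prob); we negate it so that max() in
    best_of_n prefers candidates with lower perplexity.
    """
    avg_logprob = sum(token_logprobs) / max(len(token_logprobs), 1)
    return -math.exp(-avg_logprob)
```

In this sketch, swapping `score_fn` between a perplexity scorer and a reward model is what distinguishes the two verifier settings the abstract contrasts for open-ended versus reasoning-heavy prompts.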

Ashim Gupta, Vivek Srikumar

Subjects: computing technology; computer technology

Ashim Gupta, Vivek Srikumar. Test-Time Scaling with Repeated Sampling Improves Multilingual Text Generation [EB/OL]. (2025-05-27) [cited 2025-07-16]. https://arxiv.org/abs/2505.21941.
