
RepliBench: Evaluating the Autonomous Replication Capabilities of Language Model Agents

Source: arXiv
Abstract

Uncontrollable autonomous replication of language model agents poses a critical safety risk. To better understand this risk, we introduce RepliBench, a suite of evaluations designed to measure autonomous replication capabilities. RepliBench is derived from a decomposition of these capabilities covering four core domains: obtaining resources, exfiltrating model weights, replicating onto compute, and persisting on this compute for long periods. We create 20 novel task families consisting of 86 individual tasks. We benchmark 5 frontier models, and find they do not currently pose a credible threat of self-replication, but succeed on many components and are improving rapidly. Models can deploy instances from cloud compute providers, write self-propagating programs, and exfiltrate model weights under simple security setups, but struggle to pass KYC checks or set up robust and persistent agent deployments. Overall the best model we evaluated (Claude 3.7 Sonnet) has a >50% pass@10 score on 15/20 task families, and a >50% pass@10 score for 9/20 families on the hardest variants. These findings suggest autonomous replication capability could soon emerge with improvements in these remaining areas or with human assistance.
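The abstract reports results as pass@10 scores per task family. The paper does not spell out its estimator here, but the standard unbiased pass@k estimator (Chen et al., 2021) is the usual choice; a minimal sketch, assuming n attempts per task of which c succeeded:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k samples drawn (without replacement) from n attempts,
    c of which succeeded, is a success.

    pass@k = 1 - C(n - c, k) / C(n, k)
    """
    if n - c < k:
        # Fewer than k failures exist, so any k-sample must
        # contain at least one success.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = 10 attempts and k = 10, pass@10 is 1.0 whenever
# at least one attempt succeeded, and 0.0 otherwise:
print(pass_at_k(10, 3, 10))  # → 1.0
print(pass_at_k(10, 0, 10))  # → 0.0
```

A ">50% pass@10" threshold on a task family therefore means that, averaged over its tasks, the estimated chance of at least one success in 10 attempts exceeds one half.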

Michael Schmatz, Jay Bailey, Ollie Matthews, Ben Millwood, Alan Cooney, Alex Remedios, Oliver Sourbut, Sid Black, Asa Cooper Stickland, Jake Pencharz

Subjects: Computing Technology; Computer Technology; Automation Technology; Automation Equipment

Michael Schmatz, Jay Bailey, Ollie Matthews, Ben Millwood, Alan Cooney, Alex Remedios, Oliver Sourbut, Sid Black, Asa Cooper Stickland, Jake Pencharz. RepliBench: Evaluating the Autonomous Replication Capabilities of Language Model Agents [EB/OL]. (2025-04-21) [2025-06-05]. https://arxiv.org/abs/2504.18565.
