
Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models

Source: arXiv

Abstract

Language Model (LM) agents for cybersecurity that are capable of autonomously identifying vulnerabilities and executing exploits have the potential to cause real-world impact. Policymakers, model providers, and researchers in the AI and cybersecurity communities are interested in quantifying the capabilities of such agents to help mitigate cyber risk and investigate opportunities for penetration testing. Toward that end, we introduce Cybench, a framework for specifying cybersecurity tasks and evaluating agents on those tasks. We include 40 professional-level Capture the Flag (CTF) tasks from 4 distinct CTF competitions, chosen to be recent, meaningful, and spanning a wide range of difficulties. Each task includes its own description and starter files, and is initialized in an environment where an agent can execute commands and observe outputs. Since many tasks are beyond the capabilities of existing LM agents, we introduce subtasks for each task, which break down a task into intermediary steps for a more detailed evaluation. To evaluate agent capabilities, we construct a cybersecurity agent and evaluate 8 models: GPT-4o, OpenAI o1-preview, Claude 3 Opus, Claude 3.5 Sonnet, Mixtral 8x22b Instruct, Gemini 1.5 Pro, Llama 3 70B Chat, and Llama 3.1 405B Instruct. For the top-performing models (GPT-4o and Claude 3.5 Sonnet), we further investigate performance across 4 agent scaffolds (structured bash, action-only, pseudoterminal, and web search). Without subtask guidance, agents leveraging Claude 3.5 Sonnet, GPT-4o, OpenAI o1-preview, and Claude 3 Opus successfully solved complete tasks that took human teams up to 11 minutes to solve. In comparison, the most difficult task took human teams 24 hours and 54 minutes to solve. All code and data are publicly available at https://cybench.github.io.

Daniel E. Ho, Andy K. Zhang, Neil Perry, Riya Dulepet, Joey Ji, Celeste Menders, Justin W. Lin, Eliot Jones, Gashon Hussein, Samantha Liu, Donovan Jasper, Pura Peetathawatchai, Ari Glenn, Vikram Sivashankar, Daniel Zamoshchin, Leo Glikbarg, Derek Askaryar, Mike Yang, Teddy Zhang, Rishi Alluri, Nathan Tran, Rinnara Sangpisit, Polycarpos Yiorkadjis, Kenny Osele, Gautham Raghupathi, Dan Boneh, Percy Liang

Subjects: Computing and Computer Technology; Security Science

Daniel E. Ho, Andy K. Zhang, Neil Perry, Riya Dulepet, Joey Ji, Celeste Menders, Justin W. Lin, Eliot Jones, Gashon Hussein, Samantha Liu, Donovan Jasper, Pura Peetathawatchai, Ari Glenn, Vikram Sivashankar, Daniel Zamoshchin, Leo Glikbarg, Derek Askaryar, Mike Yang, Teddy Zhang, Rishi Alluri, Nathan Tran, Rinnara Sangpisit, Polycarpos Yiorkadjis, Kenny Osele, Gautham Raghupathi, Dan Boneh, Percy Liang. Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models [EB/OL]. (2024-08-15) [2025-06-12]. https://arxiv.org/abs/2408.08926.
