
Benchmarking Practices in LLM-driven Offensive Security: Testbeds, Metrics, and Experiment Design


Source: arXiv

Abstract

Large Language Models (LLMs) have emerged as a powerful approach for driving offensive penetration-testing tooling. This paper analyzes the methodology and benchmarking practices used for evaluating LLM-driven attacks, focusing on offensive uses of LLMs in cybersecurity. We review 16 research papers detailing 15 prototypes and their respective testbeds. We detail our findings and provide actionable recommendations for future research, emphasizing the importance of extending existing testbeds, creating baselines, and including comprehensive metrics and qualitative analysis. We also note the distinction between security research and practice, suggesting that CTF-based challenges may not fully represent real-world penetration testing scenarios.

Andreas Happe, Jürgen Cito

Subject: Security Science

Andreas Happe, Jürgen Cito. Benchmarking Practices in LLM-driven Offensive Security: Testbeds, Metrics, and Experiment Design [EB/OL]. (2025-04-14) [2025-04-27]. https://arxiv.org/abs/2504.10112
