|国家预印本平台
首页|The Automation Advantage in AI Red Teaming

The Automation Advantage in AI Red Teaming

The Automation Advantage in AI Red Teaming

来源:Arxiv_logoArxiv
英文摘要

This paper analyzes Large Language Model (LLM) security vulnerabilities based on data from Crucible, encompassing 214,271 attack attempts by 1,674 users across 30 LLM challenges. Our findings reveal automated approaches significantly outperform manual techniques (69.5% vs 47.6% success rate), despite only 5.2% of users employing automation. We demonstrate that automated approaches excel in systematic exploration and pattern matching challenges, while manual approaches retain speed advantages in certain creative reasoning scenarios, often solving problems 5x faster when successful. Challenge categories requiring systematic exploration are most effectively targeted through automation, while intuitive challenges sometimes favor manual techniques for time-to-solve metrics. These results illuminate how algorithmic testing is transforming AI red-teaming practices, with implications for both offensive security research and defensive measures. Our analysis suggests optimal security testing combines human creativity for strategy development with programmatic execution for thorough exploration.

Rob Mulla、Ads Dawson、Vincent Abruzzon、Brian Greunke、Nick Landers、Brad Palm、Will Pearce

自动化基础理论自动化技术、自动化技术设备计算技术、计算机技术

Rob Mulla,Ads Dawson,Vincent Abruzzon,Brian Greunke,Nick Landers,Brad Palm,Will Pearce.The Automation Advantage in AI Red Teaming[EB/OL].(2025-04-28)[2025-06-06].https://arxiv.org/abs/2504.19855.点此复制

评论