
On the Ethics of Using LLMs for Offensive Security

Source: arXiv
Abstract

Large Language Models (LLMs) have rapidly evolved over the past few years and are currently being evaluated for their efficacy in the domain of offensive cyber-security. While initial forays showcase the potential of LLMs to enhance security research, they also raise critical ethical concerns regarding the dual-use nature of offensive security tooling. This paper analyzes a set of papers that leverage LLMs for offensive security, focusing on how ethical considerations are expressed and justified in their work. The goal is to assess the culture of AI in offensive security research with regard to ethics communication, highlighting trends, best practices, and gaps in the current discourse. We provide insights into how the academic community navigates the fine line between innovation and ethical responsibility. In particular, our results show that 13 of the 15 reviewed prototypes (86.6%) mention ethical considerations, indicating awareness of the potential dual-use of their research. The main motivations given for the research were broadening access to penetration testing and preparing defenders for AI-guided attackers.

Andreas Happe、Jürgen Cito

Subject areas: security science and computing technology; computer technology

Andreas Happe, Jürgen Cito. On the Ethics of Using LLMs for Offensive Security [EB/OL]. (2025-06-10) [2025-06-18]. https://arxiv.org/abs/2506.08693.
