
A Red Teaming Roadmap Towards System-Level Safety


Source: arXiv
Abstract

Large Language Model (LLM) safeguards, which implement request refusals, have become a widely adopted mitigation strategy against misuse. At the intersection of adversarial machine learning and AI safety, safeguard red teaming has effectively identified critical vulnerabilities in state-of-the-art refusal-trained LLMs. However, in our view the many conference submissions on LLM red teaming do not, in aggregate, prioritize the right research problems. First, testing against clear product safety specifications should take a higher priority than abstract social biases or ethical principles. Second, red teaming should prioritize realistic threat models that represent the expanding risk landscape and what real attackers might do. Finally, we contend that system-level safety is a necessary step to move red teaming research forward, as AI models present new threats as well as affordances for threat mitigation (e.g., detection and banning of malicious users) once placed in a deployment context. Adopting these priorities will be necessary in order for red teaming research to adequately address the slate of new threats that rapid AI advances present today and will present in the very near future.
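The abstract's example of a deployment-level affordance (detection and banning of malicious users) can be made concrete with a minimal sketch. The Python below is an illustrative assumption, not the authors' implementation: the names is_harmful, generate, handle_request, and BAN_THRESHOLD are hypothetical, and the harmfulness check is a toy stand-in for a real safeguard classifier.

```python
# Sketch of a system-level safeguard: combine per-request refusal with
# detection and banning of repeat offenders at the deployment layer.
# All names and thresholds here are illustrative assumptions.
from collections import defaultdict

BAN_THRESHOLD = 3  # hypothetical: violations allowed before a ban

violation_counts: dict[str, int] = defaultdict(int)
banned_users: set[str] = set()


def is_harmful(prompt: str) -> bool:
    """Toy placeholder for a safeguard/harmfulness classifier."""
    return "build a weapon" in prompt.lower()


def generate(prompt: str) -> str:
    """Placeholder for the underlying LLM call."""
    return f"[model response to: {prompt!r}]"


def handle_request(user_id: str, prompt: str) -> str:
    # System-level control: banned users never reach the model.
    if user_id in banned_users:
        return "Account suspended for repeated policy violations."

    # Model-level control: refuse individual harmful requests and
    # record the violation against the requesting account.
    if is_harmful(prompt):
        violation_counts[user_id] += 1
        if violation_counts[user_id] >= BAN_THRESHOLD:
            banned_users.add(user_id)
        return "Request refused."

    return generate(prompt)
```

The point of the sketch is that the mitigation lives outside the model: even if a single refusal is bypassed, the surrounding system can still detect patterns of misuse and remove the attacker, which is the kind of system-level safety the abstract argues red teaming should target.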

Zifan Wang, Christina Q. Knight, Jeremy Kritz, Willow E. Primack, Julian Michael

Subjects: safety science; automation technology; automation equipment

Zifan Wang, Christina Q. Knight, Jeremy Kritz, Willow E. Primack, Julian Michael. A Red Teaming Roadmap Towards System-Level Safety [EB/OL]. (2025-05-30) [2025-07-01]. https://arxiv.org/abs/2506.05376.
