From Rogue to Safe AI: The Role of Explicit Refusals in Aligning LLMs with International Humanitarian Law
From Rogue to Safe AI: The Role of Explicit Refusals in Aligning LLMs with International Humanitarian Law
Large Language Models (LLMs) are widely used across sectors, yet their alignment with International Humanitarian Law (IHL) is not well understood. This study evaluates eight leading LLMs on their ability to refuse prompts that explicitly violate these legal frameworks, focusing also on helpfulness - how clearly and constructively refusals are communicated. While most models rejected unlawful requests, the clarity and consistency of their responses varied. By revealing the model's rationale and referencing relevant legal or safety principles, explanatory refusals clarify the system's boundaries, reduce ambiguity, and help prevent misuse. A standardised system-level safety prompt significantly improved the quality of the explanations expressed within refusals in most models, highlighting the effectiveness of lightweight interventions. However, more complex prompts involving technical language or requests for code revealed ongoing vulnerabilities. These findings contribute to the development of safer, more transparent AI systems and propose a benchmark to evaluate the compliance of LLM with IHL.
John Mavi、Diana Teodora G?itan、Sergio Coronado
安全科学
John Mavi,Diana Teodora G?itan,Sergio Coronado.From Rogue to Safe AI: The Role of Explicit Refusals in Aligning LLMs with International Humanitarian Law[EB/OL].(2025-06-05)[2025-07-19].https://arxiv.org/abs/2506.06391.点此复制
评论