MOCHA: Are Code Language Models Robust Against Multi-Turn Malicious Coding Prompts?
Recent advancements in Large Language Models (LLMs) have significantly enhanced their code generation capabilities. However, their robustness against adversarial misuse, particularly through multi-turn malicious coding prompts, remains underexplored. In this work, we introduce code decomposition attacks, where a malicious coding task is broken down into a series of seemingly benign subtasks across multiple conversational turns to evade safety filters. To facilitate systematic evaluation, we introduce MOCHA, a large-scale benchmark designed to evaluate the robustness of code LLMs against both single-turn and multi-turn malicious prompts. Empirical results across open- and closed-source models reveal persistent vulnerabilities, especially under multi-turn scenarios. Fine-tuning on MOCHA improves rejection rates while preserving coding ability, and, importantly, enhances robustness on external adversarial datasets, with up to a 32.4% increase in rejection rates without any additional supervision.
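The central measurement described above is a rejection rate over single- and multi-turn adversarial conversations. The Python sketch below is only an illustration of how such an evaluation loop might be structured, not the authors' implementation; the chat and is_refusal callables are hypothetical placeholders for the model under test and a refusal detector.

# Minimal sketch (not the paper's code): measuring rejection rate on
# multi-turn prompts, assuming hypothetical `chat` and `is_refusal` callables.
from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": ...}

def rejection_rate(
    conversations: List[List[str]],        # each item: ordered user turns of one attack
    chat: Callable[[List[Message]], str],  # model under test (hypothetical interface)
    is_refusal: Callable[[str], bool],     # refusal detector, e.g. keyword match or judge model
) -> float:
    """Fraction of conversations in which the model refuses at least one turn."""
    refused = 0
    for turns in conversations:
        history: List[Message] = []
        conversation_refused = False
        for user_turn in turns:
            history.append({"role": "user", "content": user_turn})
            reply = chat(history)
            history.append({"role": "assistant", "content": reply})
            if is_refusal(reply):
                conversation_refused = True
                break
        refused += conversation_refused
    return refused / max(len(conversations), 1)

Under a multi-turn decomposition attack, each benign-looking subtask arrives as a separate user turn, so a robust model should refuse at some point in the sequence rather than only at an explicitly malicious single-turn request.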
Muntasir Wahed, Xiaona Zhou, Kiet A. Nguyen, Tianjiao Yu, Nirav Diwan, Gang Wang, Dilek Hakkani-Tür, Ismini Lourentzou
Computing Technology, Computer Technology
Muntasir Wahed, Xiaona Zhou, Kiet A. Nguyen, Tianjiao Yu, Nirav Diwan, Gang Wang, Dilek Hakkani-Tür, Ismini Lourentzou. MOCHA: Are Code Language Models Robust Against Multi-Turn Malicious Coding Prompts? [EB/OL]. (2025-07-25) [2025-08-10]. https://arxiv.org/abs/2507.19598.