
Improving LLM Reasoning for Vulnerability Detection via Group Relative Policy Optimization

Source: arXiv
Abstract

Improving and understanding the training dynamics and reasoning of Large Language Models (LLMs) has become essential for their deployment in AI-based security tools, such as software vulnerability detection. In this work, we present an extensive study aimed at advancing recent RL-based finetuning techniques for LLMs in the context of vulnerability detection. We start by highlighting key limitations of commonly adopted LLMs, such as their tendency to over-predict certain types of vulnerabilities while failing to detect others. To address this challenge, we explore the use of Group Relative Policy Optimization (GRPO), a recent policy-gradient method, for guiding LLM behavior through structured, rule-based rewards. We enable its application to the vulnerability detection task by redefining its advantage functions and reward signals using annotations from widely used datasets in the field, including BigVul, DiverseVul, and CleanVul. The proposed methodology enables an extensive set of experiments, addressing multiple research questions regarding the impact of GRPO on generalization, reasoning capabilities, and performance improvements over standard supervised finetuning (SFT). Our findings offer valuable insights into the potential of RL-based training to enhance both the performance and reasoning abilities of LLMs in the context of software vulnerability detection.
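The abstract's key ingredients are a group-relative advantage (GRPO dispenses with a learned critic by normalizing each sampled completion's reward against its group) and rule-based rewards derived from dataset annotations. The sketch below illustrates these two pieces under stated assumptions: the answer-tag format, the reward values, and the helper names are hypothetical illustrations, not the paper's actual reward design.

```python
import numpy as np

def rule_based_reward(completion: str, label: str) -> float:
    """Toy rule-based reward (hypothetical): reward a well-formed
    <answer>...</answer> block and a verdict matching the annotation."""
    reward = 0.0
    if "<answer>" in completion and "</answer>" in completion:
        reward += 0.5  # format reward
        answer = completion.split("<answer>")[1].split("</answer>")[0].strip().lower()
        if answer == label.lower():
            reward += 1.0  # correctness reward from the dataset annotation
    return reward

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> np.ndarray:
    """GRPO-style advantage: normalize each completion's reward by the
    mean and std of its sampled group (no value network needed)."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: one prompt (a code snippet) with a group of G=4 sampled completions;
# the label would come from an annotated corpus such as BigVul, DiverseVul, or CleanVul.
label = "vulnerable"
completions = [
    "...reasoning...<answer>vulnerable</answer>",
    "...reasoning...<answer>not vulnerable</answer>",
    "malformed output without answer tags",
    "...reasoning...<answer>vulnerable</answer>",
]
rewards = [rule_based_reward(c, label) for c in completions]
advantages = group_relative_advantages(rewards)
print(rewards)     # e.g. [1.5, 0.5, 0.0, 1.5]
print(advantages)  # completions above the group average receive positive advantage
```

The resulting advantages would weight the policy-gradient update, pushing the model toward completions that are both well-formatted and correct relative to their group.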

Marco Simoni, Aleksandar Fontana, Giulio Rossolini, Andrea Saracino

Computing Technology; Computer Technology

Marco Simoni, Aleksandar Fontana, Giulio Rossolini, Andrea Saracino. Improving LLM Reasoning for Vulnerability Detection via Group Relative Policy Optimization [EB/OL]. (2025-07-03) [2025-07-16]. https://arxiv.org/abs/2507.03051
