National Preprint Platform

Secret Breach Detection in Source Code with Large Language Models

Source: arXiv
Abstract

Background: Leaking sensitive information, such as API keys, tokens, and credentials, in source code remains a persistent security threat. Traditional regex- and entropy-based tools often produce high false-positive rates due to their limited contextual understanding. Aims: This work aims to enhance secret detection in source code using large language models (LLMs), reducing false positives while maintaining high recall. We also evaluate the feasibility of using fine-tuned, smaller models for local deployment. Method: We propose a hybrid approach combining regex-based candidate extraction with LLM-based classification. We evaluate pre-trained and fine-tuned variants of various large language models on a benchmark dataset drawn from 818 GitHub repositories. Various prompting strategies and efficient fine-tuning methods are employed for both binary and multiclass classification. Results: The fine-tuned LLaMA-3.1 8B model achieved an F1-score of 0.9852 in binary classification, outperforming regex-only baselines. For multiclass classification, Mistral-7B reached 0.982 accuracy. Fine-tuning significantly improved performance across all models. Conclusions: Fine-tuned LLMs offer an effective and scalable solution for secret detection, greatly reducing false positives. Open-source models provide a practical alternative to commercial APIs, enabling secure and cost-efficient deployment in development workflows.
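The first stage of the hybrid pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the regex pattern, entropy threshold, and filter logic below are hypothetical assumptions, not the paper's actual implementation, and the second stage (passing surviving candidates to an LLM classifier) is omitted.

```python
import math
import re

# Hypothetical pattern for assignments of API-key-like values;
# the paper's actual pattern set is not specified here.
CANDIDATE_PATTERN = re.compile(
    r'(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*["\']([^"\']{8,})["\']'
)

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, used to pre-filter low-entropy strings."""
    if not s:
        return 0.0
    freqs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in freqs)

def extract_candidates(source: str, min_entropy: float = 3.0) -> list[str]:
    """Stage 1: regex extraction plus an entropy filter.

    Each surviving candidate would then be handed to the LLM-based
    classifier (stage 2) for a final true/false-positive decision.
    """
    candidates = []
    for match in CANDIDATE_PATTERN.finditer(source):
        value = match.group(2)
        if shannon_entropy(value) >= min_entropy:
            candidates.append(value)
    return candidates

# A fabricated example key and an obvious non-secret:
code = 'API_KEY = "sk-9aG3xQ7vLp2ZtR8w"\npassword = "aaaaaaaa"\n'
print(extract_candidates(code))  # the low-entropy "aaaaaaaa" is filtered out
```

The entropy pre-filter is one plausible way to cut down the candidate set before the (comparatively expensive) LLM call; it does not by itself resolve the false-positive problem that motivates the LLM stage.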

Md Nafiu Rahman, Sadif Ahmed, Zahin Wahab, S M Sohan, Rifat Shahriyar

Subject: Computing Technology, Computer Technology

Md Nafiu Rahman, Sadif Ahmed, Zahin Wahab, S M Sohan, Rifat Shahriyar. Secret Breach Detection in Source Code with Large Language Models [EB/OL]. (2025-04-25) [2025-05-28]. https://arxiv.org/abs/2504.18784.