Retrieval-Augmented Few-Shot Prompting Versus Fine-Tuning for Code Vulnerability Detection

Few-shot prompting has emerged as a practical alternative to fine-tuning for leveraging the capabilities of large language models (LLMs) in specialized tasks. However, its effectiveness depends heavily on the selection and quality of in-context examples, particularly in complex domains. In this work, we examine retrieval-augmented prompting as a strategy to improve few-shot performance in code vulnerability detection, where the goal is to identify one or more security-relevant weaknesses present in a given code snippet from a predefined set of vulnerability categories. We perform a systematic evaluation using the Gemini-1.5-Flash model across three approaches: (1) standard few-shot prompting with randomly selected examples, (2) retrieval-augmented prompting using semantically similar examples, and (3) retrieval-based labeling, which assigns labels based on retrieved examples without model inference. Our results show that retrieval-augmented prompting consistently outperforms the other prompting strategies. At 20 shots, it achieves an F1 score of 74.05% and a partial match accuracy of 83.90%. We further compare this approach against zero-shot prompting and several fine-tuned models, including Gemini-1.5-Flash and smaller open-source models such as DistilBERT, DistilGPT2, and CodeBERT. Retrieval-augmented prompting outperforms both zero-shot (F1 score: 36.35%, partial match accuracy: 20.30%) and fine-tuned Gemini (F1 score: 59.31%, partial match accuracy: 53.10%), while avoiding the training time and cost associated with model fine-tuning. On the other hand, fine-tuning CodeBERT yields higher performance (F1 score: 91.22%, partial match accuracy: 91.30%) but requires additional training, maintenance effort, and resources.
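
To make the retrieval-augmented prompting setup concrete, the sketch below retrieves the k labeled snippets most similar to a query and packs them into a few-shot prompt for Gemini-1.5-Flash. It is a minimal illustration only: the embedding model (all-MiniLM-L6-v2), cosine similarity as the retrieval metric, and the prompt wording are assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch of retrieval-augmented few-shot prompting (approach 2).
# Assumed, not from the paper: the embedding model, cosine similarity as the
# retrieval metric, and the exact prompt wording.
import numpy as np
from sentence_transformers import SentenceTransformer
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
llm = genai.GenerativeModel("gemini-1.5-flash")
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def retrieve_examples(query_code, train_snippets, train_labels, k=20):
    """Return the k training examples most similar to the query snippet."""
    corpus = embedder.encode(train_snippets, normalize_embeddings=True)
    query = embedder.encode([query_code], normalize_embeddings=True)[0]
    scores = corpus @ query                     # cosine similarity
    top = np.argsort(-scores)[:k]
    return [(train_snippets[i], train_labels[i]) for i in top]

def classify(query_code, train_snippets, train_labels, k=20):
    """Build a k-shot prompt from retrieved examples and query the LLM."""
    shots = retrieve_examples(query_code, train_snippets, train_labels, k)
    prompt = "Identify the vulnerability categories present in each code snippet.\n\n"
    for code, labels in shots:
        prompt += f"Code:\n{code}\nVulnerabilities: {', '.join(labels)}\n\n"
    prompt += f"Code:\n{query_code}\nVulnerabilities:"
    return llm.generate_content(prompt).text.strip()
```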


💡 Research Summary

This paper compares retrieval-augmented few-shot prompting with fine-tuning for code vulnerability detection, where the task is to identify one or more security-relevant weaknesses in a code snippet from a predefined set of categories. Using the Gemini-1.5-Flash model, the study evaluates three prompting approaches: standard few-shot prompting with randomly selected examples, retrieval-augmented prompting with semantically similar examples, and retrieval-based labeling, which assigns labels from retrieved examples without model inference. Retrieval-augmented prompting consistently outperforms the other prompting strategies, reaching an F1 score of 74.05% and a partial match accuracy of 83.90% at 20 shots. It also surpasses zero-shot prompting (F1 score: 36.35%, partial match accuracy: 20.30%) and fine-tuned Gemini-1.5-Flash (F1 score: 59.31%, partial match accuracy: 53.10%) while avoiding the training time and cost of fine-tuning; fine-tuned DistilBERT, DistilGPT2, and CodeBERT serve as additional baselines. Fine-tuned CodeBERT remains the strongest model overall (F1 score: 91.22%, partial match accuracy: 91.30%), but it requires additional training, maintenance effort, and resources. Overall, the study positions retrieval-augmented prompting as a strong, training-free alternative to standard few-shot prompting and to fine-tuning, particularly when training data or resources are limited.
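
For contrast with the prompting pipelines above, retrieval-based labeling (the third approach) skips model inference entirely and assigns labels directly from the retrieved neighbors. The sketch below uses a simple majority vote over the k nearest training examples; the embedding model and the aggregation rule are illustrative assumptions, since the summary does not specify how labels are combined.

```python
# Minimal sketch of retrieval-based labeling (approach 3): labels are copied
# from the most similar training examples, with no LLM call at all. The
# encoder choice and the majority-vote rule are assumptions for illustration.
from collections import Counter
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def label_by_retrieval(query_code, train_snippets, train_labels, k=20):
    """Predict vulnerability labels by majority vote over the k nearest neighbors."""
    corpus = embedder.encode(train_snippets, normalize_embeddings=True)
    query = embedder.encode([query_code], normalize_embeddings=True)[0]
    top = np.argsort(-(corpus @ query))[:k]      # rank by cosine similarity
    votes = Counter(label for i in top for label in train_labels[i])
    # Keep every label voted for by at least half of the retrieved examples.
    return [label for label, n in votes.items() if n >= k / 2]
```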

