Adaptive Hierarchical Evaluation of LLMs and SAST tools for CWE Prediction in Python


📝 Original Info

  • Title: Adaptive Hierarchical Evaluation of LLMs and SAST tools for CWE Prediction in Python
  • ArXiv ID: 2601.01320
  • Date: 2026-01-04
  • Authors: Muntasir Adnan, Carlos C. N. Kuhn

📝 Abstract

Large Language Models have become integral to software development, yet they frequently generate vulnerable code. Existing code vulnerability detection benchmarks employ binary classification, lacking the CWE-level specificity required for actionable feedback in iterative correction systems. We present ALPHA (Adaptive Learning via Penalty in Hierarchical Assessment), the first function-level Python benchmark that evaluates both LLMs and SAST tools using hierarchically aware, CWE-specific penalties. ALPHA distinguishes between overgeneralisation, over-specification, and lateral errors, reflecting practical differences in diagnostic utility. Evaluating seven LLMs and two SAST tools, we find LLMs substantially outperform SAST, though SAST demonstrates higher precision when detections occur. Critically, prediction consistency varies dramatically across models (8.26%-81.87% agreement), with significant implications for feedback-driven systems. We further outline a pathway for future work incorporating ALPHA penalties into supervised fine-tuning, which could provide principled hierarchy-aware vulnerability detection pending empirical validation.
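The abstract's three error types follow directly from the CWE tree: a prediction can land on an ancestor of the true CWE (overgeneralisation), a descendant (over-specification), or a different branch entirely (lateral error). The sketch below illustrates that classification; the CWE sub-hierarchy fragment and the penalty values are hypothetical placeholders, not the actual ALPHA weights from the paper.

```python
# Illustrative hierarchy-aware CWE penalty assignment.
# The parent map and penalty values are assumptions for illustration,
# not the ALPHA benchmark's actual hierarchy or weights.

# Hypothetical fragment of the CWE tree: child -> parent.
PARENT = {
    "CWE-89": "CWE-943",   # SQL Injection -> Improper Neutralization in Data Query Logic
    "CWE-943": "CWE-74",   # -> Injection
    "CWE-74": "CWE-707",   # -> Improper Neutralization
}

# Assumed penalties: an overgeneral prediction is still partially useful,
# a lateral error is maximally misleading.
PENALTY = {"exact": 0.0, "overgeneral": 0.3, "overspecific": 0.5, "lateral": 1.0}

def ancestors(cwe):
    """Yield the proper ancestors of a CWE by walking up the parent map."""
    while cwe in PARENT:
        cwe = PARENT[cwe]
        yield cwe

def classify(predicted, true):
    """Relate a predicted CWE to the ground-truth CWE within the hierarchy."""
    if predicted == true:
        return "exact"
    if predicted in ancestors(true):   # prediction is an ancestor: too generic
        return "overgeneral"
    if true in ancestors(predicted):   # prediction is a descendant: too specific
        return "overspecific"
    return "lateral"                   # different branch of the tree

def penalty(predicted, true):
    return PENALTY[classify(predicted, true)]
```

For example, predicting the generic CWE-74 (Injection) when the ground truth is CWE-89 (SQL Injection) would score as an overgeneralisation, while predicting an unrelated CWE incurs the full lateral penalty.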

📄 Full Content

...(The full text is omitted here for length; see the original site for the complete article.)
