SeBERTis: A Framework for Producing Classifiers of Security-Related Issue Reports


Monitoring issue tracker submissions is a crucial software maintenance activity. A key goal is the prioritization of high-risk, security-related bugs. If such bugs can be recognized early, the risk of propagation to dependent products and harm to stakeholder interests can be mitigated. To assist triage engineers with this task, several automatic detection techniques, from Machine Learning (ML) models to prompting Large Language Models (LLMs), have been proposed. Although promising to some extent, prior techniques often memorize lexical cues as decision shortcuts, yielding low detection rates, especially for more complex submissions. As such, these classifiers do not yet meet the practical expectations of a real-time detector of security-related issues. To address these limitations, we propose SEBERTIS, a framework for training Deep Neural Networks (DNNs) as classifiers independent of lexical cues, so that they can confidently detect fully unseen security-related issues. SEBERTIS capitalizes on fine-tuning bidirectional transformer architectures as Masked Language Models (MLMs) on a set of vocabulary semantically equivalent to the prediction labels (which we call Semantic Surrogates) after they have been replaced with a mask. Our SEBERTIS-trained classifier achieves a 0.9880 F1-score in detecting security-related issues in a curated corpus of 10,000 GitHub issue reports, substantially outperforming state-of-the-art issue classifiers, with 14.44%-96.98%, 15.40%-93.07%, and 14.90%-94.72% higher detection precision, recall, and F1-score over ML-based baselines. Our classifier also substantially surpasses LLM baselines, with improvements of 23.20%-63.71%, 36.68%-85.63%, and 39.49%-74.53% in precision, recall, and F1-score.


💡 Research Summary

The paper addresses the critical need for early detection of security‑related issue reports in software maintenance. Existing approaches—ranging from traditional machine‑learning classifiers that rely on structured fields or TF‑IDF vectors to recent large language models (LLMs) prompted with few‑shot examples—suffer from a common weakness: they over‑fit to lexical cues (e.g., the presence of words like “security”, “vulnerability”, etc.). Consequently, they miss complex or implicitly described security bugs, limiting their usefulness for real‑time triage.

To overcome this, the authors propose SEBERTIS, a framework that trains a bidirectional transformer (BERT) as a masked language model (MLM) using “Semantic Surrogates”. Semantic Surrogates are a curated list of keywords that can semantically replace the class labels “security‑related” and “non‑security‑related”. The construction of this list involves: (1) collecting a balanced corpus of 10,000 GitHub issues (security‑tagged and non‑security‑tagged) from well‑maintained repositories; (2) expanding an initial seed set of security tags (“security”, “vulnerability”, “CVE”, “CWE”, etc.) with synonyms from WordNet and semantically similar terms from Word2Vec; (3) manual vetting to retain only terms that unambiguously convey security relevance. The final surrogate set contains roughly a dozen tokens such as “exposure”, “risk”, “secure”, and “vulnerable”.
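The expand-then-vet pipeline above can be sketched in a few lines. Note this is a minimal illustration, not the paper's implementation: the WordNet/Word2Vec expansion sources are stubbed with a toy lookup table, and the manual vetting pass is modeled as a hypothetical allowlist.

```python
# Sketch of the Semantic Surrogate construction step (illustrative only).
# Seed tags come from the paper; RELATED_TERMS stands in for WordNet
# synonyms and Word2Vec nearest neighbours, and VETTED stands in for the
# authors' manual vetting decisions.

SEED_TAGS = {"security", "vulnerability", "cve", "cwe"}

# Stand-in for automatic expansion (WordNet synonyms, Word2Vec neighbours).
RELATED_TERMS = {
    "security": ["secure", "protection", "safety"],
    "vulnerability": ["vulnerable", "exposure", "weakness", "risk"],
    "cve": ["advisory"],
    "cwe": ["weakness"],
}

# Stand-in for the manual vetting pass: keep only terms that
# unambiguously convey security relevance.
VETTED = {"secure", "exposure", "risk", "vulnerable"}

def build_surrogates(seeds, related, vetted):
    """Expand seed tags with related terms, then keep seeds and vetted terms."""
    candidates = set(seeds)
    for seed in seeds:
        candidates.update(related.get(seed, []))
    return sorted(t for t in candidates if t in vetted or t in seeds)

surrogates = build_surrogates(SEED_TAGS, RELATED_TERMS, VETTED)
```

The key design point is that automatic expansion deliberately over-generates (e.g. “safety”, “protection”) and the human vetting step then prunes ambiguous terms, so only unambiguous security vocabulary survives.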

During training, every occurrence of a surrogate in an issue description is replaced with the mask token, and the model is fine-tuned to predict the masked surrogate from its surrounding context, so that it learns contextual security semantics rather than memorizing the keywords themselves.
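The preprocessing side of this training step can be sketched as plain text manipulation. This is a hedged illustration of the masking idea only (the actual framework fine-tunes a BERT MLM on the masked text); the surrogate list and mask token here are assumptions for the example.

```python
import re

# Illustrative surrogate-masking step: replace whole-word occurrences of
# Semantic Surrogates with BERT's [MASK] token, so the MLM objective
# becomes "predict the surrogate from context" rather than
# "memorize the keyword".

# Assumed surrogate list for the example; longer variants come first so
# the regex alternation prefers them.
SURROGATES = ["vulnerability", "vulnerable", "security", "secure",
              "exposure", "risk"]

def mask_surrogates(text, surrogates=SURROGATES, mask_token="[MASK]"):
    """Replace whole-word surrogate occurrences (case-insensitive)."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, surrogates)) + r")\b",
        flags=re.IGNORECASE,
    )
    return pattern.sub(mask_token, text)

issue = "Possible security risk: the endpoint is vulnerable to injection."
print(mask_surrogates(issue))
# Possible [MASK] [MASK]: the endpoint is [MASK] to injection.
```

Because the masked positions can only be recovered from the surrounding context, a classifier derived from this MLM cannot fall back on the lexical shortcut of spotting the surrogate words directly.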

