Recall, Risk, and Governance in Automated Proposal Screening for Research Funding: Evidence from a National Funding Programme
Research funding agencies are increasingly exploring automated tools to support early-stage proposal screening. Recent advances in large language models (LLMs) have generated optimism regarding their use for text-based evaluation, yet their institutional suitability for high-stakes screening decisions remains underexplored. In particular, there is limited empirical evidence on how automated screening systems perform when evaluated against institutional error costs. This study compares two automated approaches for proposal screening against the priorities of a national funding call: a transparent, rule-based method using term frequency-inverse document frequency (TF-IDF) with domain-specific keyword engineering, and a semantic classification approach based on a large language model. Using selection committee decisions as ground truth for 959 proposals, we evaluate performance with particular attention to error structure. The results show that the TF-IDF-based approach outperforms the LLM-based system across standard metrics, achieving substantially higher recall (78.95% vs 45.82%) and producing far fewer false negatives (68 vs 175). The LLM-based system excludes more than half of the proposals ultimately selected by the committee. While false positives can be corrected through subsequent peer review, false negatives represent an irrecoverable exclusion from expert evaluation. By foregrounding error asymmetry and institutional context, this study demonstrates that the suitability of automated screening systems depends not on model sophistication alone, but on how their error profiles, transparency, and auditability align with research evaluation practice. These findings suggest that evaluation design and error tolerance should guide the use of AI-assisted screening tools in research funding more broadly.
💡 Research Summary
This paper investigates the institutional suitability of automated tools for early‑stage research‑proposal screening, focusing on the asymmetry of error costs. Using a dataset of 959 proposals submitted to an Indian national funding programme, the authors compare two approaches: a transparent, rule‑based TF‑IDF model with manually curated domain keywords, and a semantic classification system built on a large language model (LLM). Ground‑truth labels are taken from the decisions of the selection committee (323 selected, 636 rejected).
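To make the rule-based approach concrete, here is a minimal sketch of TF-IDF scoring boosted by a curated keyword list. The keyword terms, weights, and example proposals are hypothetical illustrations; the paper's actual keyword engineering is not reproduced here.

```python
import math
from collections import Counter

# Hypothetical curated keywords with manual weights (not the paper's list).
DOMAIN_KEYWORDS = {"sustainable": 2.0, "scalable": 1.5, "rural": 1.0}

def keyword_tfidf_scores(docs):
    """Score each document by TF-IDF of curated domain keywords only.

    A keyword present in every document gets idf = log(n/df) = 0 and
    contributes nothing, so scores favour distinguishing terms.
    """
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term, weight in DOMAIN_KEYWORDS.items():
            if term in tf:
                idf = math.log(n / df[term])
                score += weight * (tf[term] / len(toks)) * idf
        scores.append(score)
    return scores

docs = [
    "a scalable sustainable energy platform for rural power grids",
    "a historical survey of medieval trade routes",
]
scores = keyword_tfidf_scores(docs)
# The first (keyword-matching) proposal scores above the second,
# which matches no curated keywords and scores 0.
```

Because both the keyword list and the weights are explicit, the scoring rule can be inspected and adjusted by programme staff, which is the transparency property the study emphasizes.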
Both models are evaluated on accuracy, precision, recall, and F1 score, but the study emphasizes recall because false negatives (irreversible exclusion of potentially fundable proposals) are far more costly than false positives, which can be filtered out later in peer review. The TF‑IDF system achieves 80.92% accuracy, 68.92% precision, 78.95% recall, and an F1 of 0.736, while the LLM attains 71.74% accuracy, 60.66% precision, 45.82% recall, and an F1 of 0.522.
Confusion‑matrix analysis reveals that the TF‑IDF method produces 68 false negatives versus 175 for the LLM, meaning the LLM excludes more than half of the proposals that the expert committee ultimately deems fundable. Although the LLM generates slightly fewer false positives (96 vs. 115), these are less consequential because they can be corrected in later review stages.
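The reported metrics follow directly from the confusion-matrix counts and class totals given in the summary (323 selected, 636 rejected; TF-IDF: 68 FN, 115 FP; LLM: 175 FN, 96 FP). A short sketch recomputing them:

```python
# Class totals from the study: 323 committee-selected, 636 rejected proposals.
SELECTED, REJECTED = 323, 636

def metrics(fn, fp):
    """Derive accuracy, precision, recall, and F1 from FN/FP counts."""
    tp = SELECTED - fn          # selected proposals the screener retained
    tn = REJECTED - fp          # rejected proposals the screener excluded
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (SELECTED + REJECTED)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

tfidf = metrics(fn=68, fp=115)   # TF-IDF confusion counts
llm = metrics(fn=175, fp=96)     # LLM confusion counts

print(f"TF-IDF: acc={tfidf[0]:.2%} prec={tfidf[1]:.2%} "
      f"rec={tfidf[2]:.2%} F1={tfidf[3]:.3f}")
print(f"LLM:    acc={llm[0]:.2%} prec={llm[1]:.2%} "
      f"rec={llm[2]:.2%} F1={llm[3]:.3f}")
# → TF-IDF: acc=80.92% prec=68.92% rec=78.95% F1=0.736
# → LLM:    acc=71.74% prec=60.66% rec=45.82% F1=0.522
```

The derived figures match the paper's reported metrics exactly, which confirms the confusion-matrix counts and headline percentages are internally consistent.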
The authors interpret these findings through Herbert Simon’s bounded rationality framework: decision‑makers operating under limited time, information, and cognitive capacity should prioritize minimizing catastrophic errors (false negatives) rather than fine‑grained discrimination. Consequently, a screening tool must offer high recall, transparency, and auditability. The rule‑based TF‑IDF approach satisfies these criteria because its keyword list and weighting scheme can be inspected, adjusted, and governed. In contrast, the LLM behaves as a black box; its conservative exclusion policy leads to a higher false‑negative rate, and its internal decision logic is difficult to trace.
From a governance perspective, the paper recommends that funding agencies define an “error‑cost matrix” before deploying AI‑assisted screening, explicitly stating acceptable levels of false positives and setting strict limits on false negatives. Ongoing monitoring, periodic re‑training or re‑engineering of keyword lists, and human‑in‑the‑loop oversight are essential to maintain legitimacy and accountability.
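One way an agency could operationalize such an error-cost matrix is as explicit weights applied to confusion counts before deployment. The weights below are hypothetical policy choices, not values from the paper; they encode the study's premise that an irreversible false negative should be costed far above a correctable false positive.

```python
# Hypothetical error-cost matrix: a false negative (irreversible exclusion)
# is weighted five times a false positive (correctable in peer review).
ERROR_COST = {"fn": 5.0, "fp": 1.0}

def expected_cost(fn, fp, costs=ERROR_COST):
    """Total weighted error cost for a screening system's confusion counts."""
    return costs["fn"] * fn + costs["fp"] * fp

# Confusion counts reported in the study:
tfidf_cost = expected_cost(fn=68, fp=115)   # → 455.0
llm_cost = expected_cost(fn=175, fp=96)     # → 971.0
# Under these (hypothetical) weights, the TF-IDF system's error cost is
# roughly half the LLM's, despite its higher false-positive count.
```

Making the weights explicit turns the choice between systems into an auditable policy decision rather than an implicit property of whichever model scores best on accuracy.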
In conclusion, the study demonstrates that model sophistication alone does not determine suitability for research‑funding screening. Instead, alignment of error profiles with institutional risk tolerance, coupled with transparency and controllability, is paramount. Future work should explore hybrid ensembles, larger and more diverse datasets, and longitudinal assessments of policy impact to refine AI‑assisted screening in the research funding ecosystem.