Bridging the Gap: Adapting Evidence to Decision Frameworks to support the link between Software Engineering academia and industry
Over twenty years ago, the Software Engineering (SE) research community became involved with Evidence-Based Software Engineering (EBSE). EBSE aims to inform industrial practice with the best evidence from rigorous research, preferably from systematic literature reviews (SLRs). Since then, SE researchers have conducted many SLRs, refined their SLR procedures, proposed alternative ways of presenting their results (such as Evidence Briefings), and profusely discussed how to conduct research that impacts practice. Nevertheless, there is still a feeling that SLRs’ results are not reaching practitioners. Something is missing. In this vision paper, we introduce Evidence to Decision (EtD) frameworks from the health sciences, which propose gathering experts in panels to assess the existing best evidence about the impact of an intervention on all relevant outcomes and to make structured recommendations based on that evidence. The insight we can leverage from EtD frameworks is not their structure per se but all the relevant criteria for making recommendations to practitioners from SLRs. Furthermore, we provide a worked example based on an SE SLR. We also discuss the challenges the SE research and practice community may face when adopting EtD frameworks, highlighting the need for more comprehensive criteria in our recommendations to industry practitioners.
💡 Research Summary
The paper addresses a persistent gap in Evidence‑Based Software Engineering (EBSE): while systematic literature reviews (SLRs) and the GRADE methodology have been adopted to assess the strength of research findings, the results rarely translate into actionable guidance for industry practitioners. The authors argue that the missing link is not the synthesis of evidence itself but a structured decision‑making process that converts evidence into concrete recommendations. To fill this gap, they propose importing the Evidence to Decision (EtD) framework from the health sciences into software engineering (SE).
EtD consists of two main phases. The first phase mirrors traditional SLR work: researchers identify, extract, and synthesize the best available evidence, then grade its certainty using GRADE (considering risk of bias, inconsistency, imprecision, indirectness, and publication bias). The second phase involves a multidisciplinary expert panel that uses the synthesized evidence together with a set of decision criteria to formulate a recommendation. The panel follows three steps: (1) formulate a precise question using the PICO format (Population, Intervention, Comparison, Outcome) and augment it with setting, perspective, sub‑groups, and conflict‑of‑interest disclosures; (2) assess the evidence against a checklist of criteria (e.g., problem priority, magnitude of desirable effects, certainty of evidence, balance of benefits and harms, resource requirements, cost‑effectiveness, stakeholder acceptability); and (3) draw a conclusion, stating the direction (for or against the intervention) and the strength of the recommendation (strong or discretionary).
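To make the shape of these three steps concrete, here is a minimal sketch in Python that models an EtD question and its criteria checklist as plain data structures. It is an illustration only, assuming nothing beyond the summary above: all class, field, and value names are our own, not part of the EtD specification or any GRADE tooling.

```python
"""Illustrative sketch (not part of the EtD specification): one possible way
to represent an EtD question, its criteria checklist, and the conclusion."""
from dataclasses import dataclass, field
from enum import Enum


class Judgment(Enum):
    """Panel judgment on a single criterion (simplified scale)."""
    FAVORS_INTERVENTION = "favors intervention"
    FAVORS_COMPARISON = "favors comparison"
    BALANCED = "balanced"
    UNCERTAIN = "uncertain"


@dataclass
class PicoQuestion:
    """Step 1: a PICO question augmented with the extra context EtD asks for."""
    population: str
    intervention: str
    comparison: str
    outcomes: list[str]
    setting: str = ""
    perspective: str = ""
    subgroups: list[str] = field(default_factory=list)
    conflicts_of_interest: list[str] = field(default_factory=list)


@dataclass
class EtdAssessment:
    """Step 2: judgments on each decision criterion, plus GRADE certainty."""
    question: PicoQuestion
    certainty_of_evidence: str     # e.g. "high", "moderate", "low", "very low"
    criteria: dict[str, Judgment]  # criterion name -> panel judgment

    def conclude(self, direction: str, strength: str, rationale: str) -> dict:
        """Step 3: record the direction and strength of the recommendation."""
        return {"direction": direction, "strength": strength,
                "rationale": rationale, "certainty": self.certainty_of_evidence}
```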
To illustrate the approach, the authors present a worked example on pair programming versus solo programming. The question is framed as: “Should pair programming be adopted in software development projects?” with population = software developers, intervention = pair programming, comparison = solo programming, outcomes = development duration, effort, and product quality, and perspective = organization/team. Using the findings of an existing SLR (Hannay et al.), the panel judges that the evidence shows moderate benefits for quality but increased effort and longer duration, with a moderate certainty level. They also consider practical aspects such as the need for productivity measurement, the variability of effects for inexperienced developers, and a declared conflict of interest from a panel member affiliated with a training provider. The final recommendation is discretionary: organizations may adopt pair programming if they can tolerate higher effort and have mechanisms to monitor productivity; the recommendation is accompanied by implementation tips (e.g., pilot studies, measurement infrastructure) and research gaps (e.g., learning outcomes, onboarding effects).
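Continuing the sketch above, the pair programming example could be recorded as follows. The judgment values are illustrative paraphrases of the summary, not the panel's verbatim output.

```python
# The worked example, expressed with the illustrative types defined above.
question = PicoQuestion(
    population="software developers",
    intervention="pair programming",
    comparison="solo programming",
    outcomes=["development duration", "effort", "product quality"],
    perspective="organization/team",
    conflicts_of_interest=["panel member affiliated with a training provider"],
)

assessment = EtdAssessment(
    question=question,
    certainty_of_evidence="moderate",
    criteria={
        "magnitude of desirable effects (quality)": Judgment.FAVORS_INTERVENTION,
        "undesirable effects (effort, duration)": Judgment.FAVORS_COMPARISON,
        "resource requirements": Judgment.UNCERTAIN,
    },
)

recommendation = assessment.conclude(
    direction="for",
    strength="discretionary",
    rationale="Adopt if higher effort is tolerable and productivity "
              "can be monitored; pilot first.",
)
print(recommendation["strength"], "-", recommendation["rationale"])
```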
The paper highlights several advantages of adopting EtD in SE: it forces a systematic translation of evidence into context‑specific guidance, makes the reasoning process transparent, and explicitly captures the strength of the recommendation. Moreover, by requiring a panel that includes both researchers and practitioners, EtD can bridge cultural and communication gaps between academia and industry.
However, the authors also discuss challenges specific to SE. Unlike health care, SE lacks standardized cost‑effectiveness models, and the stakeholder landscape is highly heterogeneous (companies, freelancers, open‑source communities). Therefore, the EtD criteria must be extended to cover software‑specific dimensions such as technical debt, licensing costs, and organizational culture. Conflict‑of‑interest management is also more complex because many researchers have industry ties. Additionally, the delivery of recommendations needs to be adapted; while health guidelines are often published in peer‑reviewed journals, SE may benefit from concise “Evidence Briefings” or interactive web portals that map directly onto the EtD structure.
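Under the same illustrative model, extending the checklist with such software-specific dimensions would amount to adding new criteria entries; the keys below are hypothetical examples drawn from the discussion above, not criteria defined by the EtD framework.

```python
# Hypothetical SE-specific criteria, added to the example assessment above.
se_specific_criteria = {
    "impact on technical debt": Judgment.UNCERTAIN,
    "licensing and tooling costs": Judgment.BALANCED,
    "fit with organizational culture": Judgment.UNCERTAIN,
}
assessment.criteria.update(se_specific_criteria)
```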
In conclusion, the authors argue that Evidence to Decision frameworks provide a promising, structured pathway to turn rigorous SE research into actionable, trustworthy recommendations for practitioners. Successful adoption will require tailoring the framework’s criteria to SE realities, establishing clear panel governance, and developing practitioner‑friendly dissemination formats. Doing so, the authors contend, could substantially narrow the long‑standing gap between SE academia and industry.