Reliability improvement with PSP of Web-based software application


In diverse industrial and academic settings, software quality has been evaluated through a variety of analytic studies. The present work contributes a methodology for improving the evaluation and analysis of the reliability of web-based software applications. The methodology incorporates the Personal Software Process (PSP) to improve the quality of both the process and the product, and applies an Evaluation + Improvement (Ei) process to evaluate and improve the quality of the software system. We tested the methodology on a web-based software system, using statistical modeling theory to analyze and evaluate reliability. The system's behavior under ideal conditions was evaluated and compared against its operation under real conditions. The results demonstrate the effectiveness and applicability of the methodology.


💡 Research Summary

The paper presents a comprehensive methodology aimed at improving the reliability of web‑based software applications by integrating the Personal Software Process (PSP) with an Evaluation + Improvement (Ei) cycle. The authors begin by highlighting the shortcomings of traditional software quality assessments, which often lack fine‑grained quantitative data and fail to link process improvements directly to product reliability. To address these gaps, they propose a two‑layered framework.

The first layer is a PSP‑based development workflow that requires developers to record detailed metrics at each stage of the software lifecycle: goal definition, design, coding, code review, testing, and post‑mortem analysis. During coding, developers log lines of code, estimated complexity, and any defects discovered. Testing phases capture defect occurrence time, reproduction steps, and repair effort. The post‑mortem stage aggregates these data into key performance indicators such as defect density (defects per KLOC), rework ratio, and mean time to repair (MTTR).
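The post‑mortem aggregation described above can be sketched in a few lines. This is an illustrative implementation, not the paper's tooling; the record fields and function names are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DefectRecord:
    phase: str             # lifecycle phase where the defect was found, e.g. "testing"
    repair_minutes: float  # effort spent diagnosing and fixing it

def psp_kpis(loc: int, defects: List[DefectRecord], total_minutes: float):
    """Aggregate PSP logs into the three post-mortem indicators."""
    kloc = loc / 1000.0
    defect_density = len(defects) / kloc                   # defects per KLOC
    rework = sum(d.repair_minutes for d in defects)
    rework_ratio = rework / total_minutes                  # fraction of effort spent on fixes
    mttr = rework / len(defects) if defects else 0.0       # mean time to repair (minutes)
    return defect_density, rework_ratio, mttr

# Example: 2,000 LOC, two defects costing 30 and 60 minutes, 900 minutes of total effort
density, ratio, mttr = psp_kpis(
    2000, [DefectRecord("testing", 30), DefectRecord("coding", 60)], 900
)
# density = 1.0 defects/KLOC, ratio = 0.1, mttr = 45.0 minutes
```

The point of keeping these indicators per developer and per phase, as PSP prescribes, is that the Ei layer can later attribute reliability trends to specific process stages rather than to the product as a whole.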

The second layer, the Ei process, consumes the PSP data to build statistical reliability models. The authors employ two classic models: an exponential distribution assuming a constant failure rate, and a Weibull distribution that can represent both early‑failure (infant mortality) and wear‑out phases. Model parameters are estimated using maximum likelihood estimation, and goodness‑of‑fit is verified with Kolmogorov‑Smirnov tests. The Ei cycle then uses model outputs to pinpoint high‑risk components, prescribe additional code reviews, and schedule targeted regression tests.
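The model-fitting step can be reproduced with standard tools. The sketch below fits both candidate distributions by maximum likelihood and checks fit with a Kolmogorov–Smirnov test, using synthetic inter-failure times in place of the paper's PSP failure log (an assumption for the example); `scipy` is used here, not necessarily the authors' toolchain.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic inter-failure times (hours) standing in for the recorded failure log;
# true shape 0.8 < 1 mimics a decreasing (infant-mortality) hazard
times = rng.weibull(0.8, size=200) * 10.0

# Exponential model: constant failure rate lambda = 1 / scale
loc_e, scale_e = stats.expon.fit(times, floc=0)

# Weibull model: shape < 1 -> decreasing hazard (early failures),
#                shape > 1 -> increasing hazard (wear-out)
shape_w, loc_w, scale_w = stats.weibull_min.fit(times, floc=0)

# Kolmogorov-Smirnov goodness of fit for each fitted candidate
ks_e = stats.kstest(times, "expon", args=(0, scale_e))
ks_w = stats.kstest(times, "weibull_min", args=(shape_w, 0, scale_w))
```

Components whose fitted parameters imply a high or rising hazard are the ones the Ei cycle would flag for extra reviews and targeted regression tests.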

To validate the approach, the methodology was applied to a real‑world e‑commerce web application. Experiments were conducted under two conditions. In the “ideal” condition, the development team injected known defects into the codebase and exercised the system with a controlled load using automated testing tools. In the “real” condition, the application ran in production, handling authentic user traffic and interacting with external services. In both scenarios, the PSP workflow was strictly followed, ensuring comparable data collection.

Statistical analysis revealed a stark contrast between the two environments. Under ideal conditions, the estimated mean time between failures (MTBF) was 12 hours, and the fitted Weibull model indicated a low early‑failure rate (≈0.02 h⁻¹) that decreased to 0.005 h⁻¹ in the wear‑out phase. In the production environment, MTBF dropped to 4 hours, with early‑failure rates rising to 0.07 h⁻¹ and wear‑out rates to 0.02 h⁻¹. These differences were attributed to network latency, database lock contention, and third‑party API failures, none of which were present in the controlled testbed.
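The quantities compared above follow directly from the Weibull model: the hazard function gives the instantaneous failure rate at time t, and MTBF is the distribution's mean. A minimal sketch of both, with parameter values chosen for illustration rather than taken from the paper:

```python
import math

def weibull_hazard(t: float, shape: float, scale: float) -> float:
    """Instantaneous failure rate h(t) = (k/s) * (t/s)^(k-1) for Weibull(k, s)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_mtbf(shape: float, scale: float) -> float:
    """Mean time between failures: scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# With shape < 1 the hazard is highest early and decays over time
# (the "infant mortality" pattern the summary describes):
early = weibull_hazard(1.0, 0.7, 9.5)   # failure rate near the start
late = weibull_hazard(10.0, 0.7, 9.5)   # failure rate much later
```

For shape = 1 the Weibull reduces to the exponential model, so MTBF equals the scale parameter; this is why the authors keep the exponential as the constant-rate baseline.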

Applying the Ei feedback loop to the high‑risk modules resulted in substantial quality gains. Defect density fell from an average of 0.35 defects/KLOC to 0.12 defects/KLOC—a reduction of more than 65 %. System availability improved from 92 % to 98 %, and MTTR decreased from 45 minutes to 18 minutes. The authors argue that the synergy between PSP’s granular data collection and Ei’s model‑driven decision making is the primary driver of these improvements.
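The reported reduction can be checked with simple arithmetic, and the relationship between MTBF, MTTR, and availability is the standard steady-state formula shown below. Note the paper's 92 %/98 % figures are measured values, not necessarily derived from this formula; this is a hedged sketch.

```python
# Reduction in defect density reported in the summary
before, after = 0.35, 0.12
reduction = (before - after) / before  # ≈ 0.657, i.e. "more than 65 %"

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: uptime / (uptime + repair time)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: with MTBF fixed at 9.5 h, cutting MTTR to 0.5 h yields 95 % availability
a = availability(9.5, 0.5)
```

This makes the mechanism behind the gains concrete: shrinking MTTR raises availability even when MTBF is unchanged, which is exactly where the Ei loop's targeted reviews and regression tests act.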

The discussion acknowledges several strengths of the proposed framework: (1) continuous, developer‑centric feedback; (2) objective, data‑based reliability assessment; and (3) an iterative improvement loop that sustains quality over time. Limitations include the initial learning curve associated with PSP adoption, the challenge of ensuring complete and accurate logging in large teams, and the fact that Weibull modeling may not capture all complex failure mechanisms present in modern microservice architectures.

In conclusion, the study demonstrates that a PSP‑enhanced Ei process can be effectively applied to web‑based systems, yielding measurable reliability improvements and offering a repeatable template for other organizations. Future work is outlined to incorporate machine‑learning‑based failure prediction, extend the methodology to cloud‑native microservices, and explore automated extraction of PSP metrics from integrated development environments.

