An Empirical Comparative Study of Checklist-based and Ad Hoc Code Reading Techniques in a Distributed Groupware Environment
Olalekan S. Akinola
Department of Computer Science, University of Ibadan, Nigeria
Solom202@yahoo.co.uk

Adenike O. Osofisan
Department of Computer Science, University of Ibadan, Nigeria
Abstract
Software inspection is a necessary and important tool for software quality assurance. Since it was introduced by Fagan at IBM in 1976, arguments have existed as to which method should be adopted to carry out the exercise, whether it should be paper-based or tool-based, and what reading technique should be used on the inspection document. Extensive work has been done to determine the effectiveness of reviewers in paper-based environments when using ad hoc and checklist reading techniques. In this work, we take software inspection research further by examining whether there is any significant difference in the defect detection effectiveness of reviewers when they use either ad hoc or checklist reading techniques in a distributed groupware environment. Twenty final-year undergraduate students of computer science, divided into ad hoc and checklist reviewer groups of ten members each, were employed to inspect a medium-sized Java code synchronously on groupware deployed on the Internet. The data obtained were subjected to tests of hypotheses using the independent t-test and correlation coefficients. Results from the study indicate that there are no significant differences in the defect detection effectiveness, effort in terms of time taken in minutes, and false positives reported by the reviewers using either ad hoc or checklist-based reading techniques in the distributed groupware environment studied.
Keywords: software inspection; ad hoc; checklist; groupware.
I. INTRODUCTION
Software may be judged to be of high or low quality depending on who is analyzing it. Thus, quality software can be said to be "software that satisfies the needs of the users and the programmers involved in it" [28]. Pfleeger highlighted four major criteria for judging the quality of software: (i) it does what the user expects it to do; (ii) its interaction with the computer resources is satisfactory; (iii) the user finds it easy to learn and to use; and (iv) the developers find it convenient in terms of design, coding, testing and maintenance.
In order to achieve the above criteria, software inspection was introduced. Software inspection has become widely used [36] since it was first introduced by Fagan [25] at IBM. This is due to its potential benefits for software development, the increased demand for quality certification in software (for example, ISO 9000 compliance requirements), and the adoption of the Capability Maturity Model as a development methodology [27].
Software inspection is a necessary and important tool for software quality assurance. It involves strict and close examinations carried out on development products to detect defects, violations of development standards and other problems [18]. The development products could be specifications, source code, contracts, test plans and test cases [33, 4, 8].
Traditionally, the software inspection artifact (requirements, designs, or code) is presented on paper to the inspectors/reviewers. The advent of Collaborative Software Development (CSD) provides opportunities for software developers in geographically dispersed locations to communicate, and further to build and share common knowledge repositories [13]. Through CSD, distributed collaborative software inspection methodologies have emerged in which groups of reviewers in different geographical locations may log on synchronously or asynchronously online to inspect an inspection artifact.
It has been hypothesized that in order to gain credibility and validity, software inspection experiments have to be conducted in different environments, using different people, languages, cultures, documents, and so on [10, 12]. That is, they must be redone in other environments. The motivation for this work stems from this hypothesis.
Specifically, the target goal of this research work is to determine whether there is any significant difference in the effectiveness of reviewers using the ad hoc code reading technique and those using the checklist reading technique in a distributed, tool-based environment.
Twenty final-year students of Computer Science were employed to carry out an inspection task on a medium-sized code in a distributed, collaborative environment. The students were divided into two groups: one group used the ad hoc code reading technique while the second group used the checklist-based code reading technique (CBR). Briefly, the results obtained show that there is no significant difference in the defect detection effectiveness of the two groups.
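For illustration only, the sketch below (in Python with SciPy, using invented numbers rather than the study's data) shows the kind of independent t-test and correlation analysis that is applied to two reviewer groups of ten members each:

# Illustrative sketch, not the authors' scripts or data: an independent
# two-sample t-test comparing defect counts from two groups of reviewers.
from scipy import stats

# Hypothetical defect counts per reviewer (10 ad hoc, 10 checklist).
ad_hoc_defects    = [6, 4, 7, 5, 8, 6, 5, 7, 4, 6]
checklist_defects = [7, 5, 6, 6, 8, 5, 7, 6, 5, 7]

# Independent t-test: H0 = both techniques yield the same mean number
# of defects detected.
t_stat, p_value = stats.ttest_ind(ad_hoc_defects, checklist_defects)

# Pearson correlation between effort (minutes) and defects found for
# one group (again, hypothetical numbers).
minutes_spent = [55, 40, 62, 48, 70, 58, 45, 66, 39, 57]
r, r_p = stats.pearsonr(minutes_spent, ad_hoc_defects)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print(f"Pearson r = {r:.3f}, p = {r_p:.3f}")

# At alpha = 0.05, a p-value above 0.05 means failing to reject H0,
# i.e. no significant difference between the two reading techniques.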