The test estimation techniques studied can be categorized into the following five groups:
1. Judgment and rules of thumb.
2. Analogy and work breakdown.
3. Factors and weights.
4. Size based estimation models.
5. Fuzzy, Neural and Case based models.
The approaches to estimation adopted by these techniques can be broadly classified as formula oriented and model oriented, while some techniques combine both. Several of these techniques identify variables relating to the project and derive a formula that provides an estimate. Such formulae incorporate heuristics based on the experience of the person proposing the technique, and there are no established criteria for evaluating them. Other techniques use those variables to build a statistical model for estimation based on the relationship between the independent variables and the dependent variable. An a-posteriori estimation model representing the testing process is built with data from completed projects; the model is subsequently used to estimate new projects. These models can be evaluated using recognized criteria.
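To make the model-oriented approach concrete, the following is a minimal sketch of an a-posteriori estimation model: an ordinary least-squares regression of test effort on a single independent variable (functional size), fitted to hypothetical data from completed projects. The data values and variable names are illustrative assumptions, not drawn from any study cited here.

```python
# Hypothetical completed-project data: (functional size in FP, test effort in person-hours).
completed = [
    (120, 310), (200, 505), (150, 390), (300, 760), (90, 240),
]

# Fit effort = intercept + slope * size by ordinary least squares.
n = len(completed)
mean_x = sum(x for x, _ in completed) / n
mean_y = sum(y for _, y in completed) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in completed)
         / sum((x - mean_x) ** 2 for x, _ in completed))
intercept = mean_y - slope * mean_x

def estimate_effort(size_fp):
    """Estimate test effort (person-hours) for a new project from its functional size."""
    return intercept + slope * size_fp
```

Once fitted on past data, the model is applied to a new project, e.g. `estimate_effort(180)` for a 180-FP project; the quality of such a model is then judged with the statistical criteria discussed in the next section.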
Model Evaluation Criteria
Estimation models are built using past data to predict future projects, and so they must be evaluated for fitness for purpose. The criteria used for evaluating the estimation models (Conte, 1986) are:
a. Coefficient of determination (R²)
b. Adjusted R² (Adj R²)
c. Mean Magnitude of Relative Error (MMRE)
d. Median Magnitude of Relative Error (MedMRE)
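These four criteria can be computed directly from a model's estimates and the corresponding actual values. The sketch below uses hypothetical actual and estimated effort figures, and assumes a model with a single predictor (k = 1) for the Adjusted R² correction.

```python
from statistics import median

# Hypothetical actual vs. model-estimated test effort (person-hours).
actual    = [310, 505, 390, 760, 240]
estimated = [330, 480, 410, 720, 250]

# R²: proportion of variance in the actuals explained by the model.
mean_a = sum(actual) / len(actual)
ss_res = sum((a - e) ** 2 for a, e in zip(actual, estimated))
ss_tot = sum((a - mean_a) ** 2 for a in actual)
r2 = 1 - ss_res / ss_tot

# Adjusted R²: penalizes additional predictors (here k = 1).
n, k = len(actual), 1
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Magnitude of Relative Error per project; MMRE is its mean, MedMRE its median.
mre = [abs(a - e) / a for a, e in zip(actual, estimated)]
mmre = sum(mre) / len(mre)
medmre = median(mre)
```

MedMRE is often preferred alongside MMRE because the median is less sensitive to a single project with a very large relative error.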
Criteria for Evaluation of Test Estimation Techniques
In order to evaluate the test estimation techniques identified in the literature study, I propose the following criteria:
Customer view of requirements: This criterion determines whether the estimation technique looks at the software requirements from a customer viewpoint or from a technical/implementation viewpoint. Estimation based on the customer viewpoint provides an opportunity for the customer to relate estimates directly to the requirements.
Functional size as a prerequisite to estimation: Most estimation methods use some form of size, either implicit or explicit, in effort estimation. When size is not explicit, benchmarking and performance studies across projects and organizations are not possible. Functional size can be measured using either international standards or locally defined sizing techniques.
Mathematical validity: Several of the estimation techniques have evolved over the years, mostly based on a ‘feel good’ approach and ignoring the validity of their mathematical foundations. This criterion looks at the metrological foundation of the proposed estimation techniques and application of statistical criteria to assess the quality of the estimation models. A valid mathematical foundation provides a sound basis for further improvements.
Verifiability: The estimate produced must be verifiable by a person other than the estimator. Verifiability makes the estimate more dependable.
Benchmarking: It is essential that estimates be comparable across organizations, as this can help later in benchmarking and verifying performance improvement. The genesis of the estimation techniques is looked at to determine whether or not benchmarking is feasible.