Evaluation of Test Estimation Techniques

The test estimation techniques studied can be categorized into the following five groups:
1. Judgment and rules of thumb.
2. Analogy and work breakdown.
3. Factors and weights.
4. Size-based estimation models.
5. Fuzzy, neural, and case-based models.

The approaches to estimation adopted by these techniques can be broadly classified as formula oriented or model oriented, while some techniques combine both. Several of the techniques identify variables relating to the project and derive a formula that produces an estimate; such formulae incorporate heuristics based on the experience of the person proposing the technique, and there are no established criteria for evaluating them. Other techniques use those variables to build a statistical estimation model based on the relationship between the independent variables and the dependent variable: an a posteriori estimation model representing the testing process is built from data on completed projects and is subsequently used to estimate new projects. These models can be evaluated using recognized criteria.
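As a minimal illustration of the model-oriented approach, the sketch below fits a simple regression model to data from completed projects and applies it to a new project. The choice of functional size as the single independent variable, the variable names, and the sample values are assumptions made for illustration only, not data from this research.

# Hedged sketch of a model-oriented estimate: an a posteriori model is
# fitted to completed-project data and then applied to a new project.
# Variable names and data values are illustrative assumptions only.
import numpy as np

# Completed projects: functional size (independent variable) and
# test effort in person-hours (dependent variable).
size = np.array([120, 250, 310, 480, 560, 700], dtype=float)
effort = np.array([300, 610, 750, 1150, 1320, 1680], dtype=float)

# Ordinary least-squares fit of effort = b0 + b1 * size.
b1, b0 = np.polyfit(size, effort, deg=1)

# Estimate a hypothetical new project of 400 size units.
new_size = 400.0
print(f"Estimated test effort: {b0 + b1 * new_size:.0f} person-hours")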

Model Evaluation Criteria

Estimation models are built using past data to predict future projects, and so they must be evaluated for their fitness for purpose. The criteria used for evaluating the estimation models (Conte, 1986) are:
a. Coefficient of determination (R2)
b. Adjusted R2 (Adj R2)
c. Mean Magnitude of Relative Error (MMRE)
d. Median Magnitude of Relative Error (MedMRE)
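As a hedged illustration, the sketch below shows one common way of computing these criteria from actual versus estimated effort values; the sample data and the assumption of a single-predictor model (k = 1) are illustrative only.

# Sketch of the evaluation criteria above, computed from actual and
# estimated effort values of a fitted model. Data and the number of
# predictors (k = 1) are illustrative assumptions.
import numpy as np

actual = np.array([300, 610, 750, 1150, 1320, 1680], dtype=float)
estimated = np.array([320, 580, 790, 1100, 1370, 1640], dtype=float)
n, k = len(actual), 1  # k = number of independent variables in the model

# Coefficient of determination (R2): proportion of variance explained.
ss_res = np.sum((actual - estimated) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Adjusted R2: penalizes R2 for the number of predictors.
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Magnitude of Relative Error per project: |actual - estimated| / actual.
mre = np.abs(actual - estimated) / actual
mmre = mre.mean()        # Mean MRE
medmre = np.median(mre)  # Median MRE

print(f"R2={r2:.3f}  Adj R2={adj_r2:.3f}  MMRE={mmre:.3f}  MedMRE={medmre:.3f}")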

Criteria for Evaluation of Test Estimation Techniques 

In order to evaluate the test estimation techniques identified in the literature study, I propose the following criteria:

Customer view of requirements: This criterion determines whether the estimation technique looks at the software requirements from the customer viewpoint or from the technical/implementation viewpoint. Estimation based on the customer viewpoint gives the customer an opportunity to relate the estimates directly to the requirements.

Functional size as a prerequisite to estimation: Most estimation methods use some form of size, either implicit or explicit, in effort estimation. When size is not explicit, benchmarking and performance studies across projects and organizations are not possible. Functional size can be measured using either international standards or locally defined sizing techniques.

Mathematical validity: Several of the estimation techniques have evolved over the years, mostly based on a ‘feel good’ approach that ignores the validity of their mathematical foundations. This criterion looks at the metrological foundation of the proposed estimation techniques and at the application of statistical criteria to assess the quality of the estimation models. A valid mathematical foundation provides a sound basis for further improvements.

Verifiability: The estimate produced must be verifiable by a person other than the estimator. Verifiability makes the estimate more dependable.

Benchmarking: It is essential that estimates be comparable across organizations, as this helps later in benchmarking and in verifying performance improvement. The genesis of each estimation technique is examined to determine whether benchmarking is feasible.

Table of Contents

INTRODUCTION
0.1 Context of Software Testing and Estimation
0.2 Processes and Test Types
0.3 Early Perspectives on Testing and Effort Estimation
0.4 Implications for the Industry & Economy
0.4.1 Impacts of Testing and Software Defects
0.4.2 Potential of a Better Estimation Model for testing
0.5 Motivation for the Research
0.6 Organization of the Thesis
CHAPTER 1 RESEARCH GOAL, OBJECTIVES AND METHODOLOGY
1.1 Research Goal
1.2 Research Objectives
1.3 Research Approach
1.3.1 Research Disciplines
1.3.1.1 Software Testing
1.3.1.2 Software Engineering
1.3.1.3 Metrology
1.3.1.4 Statistics
1.3.2 Research Methodology
1.3.2.1 Literature Study
1.3.2.2 Designing a Unified Framework for Test Estimation
1.3.2.3 Designing Estimation Models for Functional Testing
1.3.2.4 Evaluating the Estimation Models
1.3.2.5 Developing a Prototype Estimation Tool
CHAPTER 2 LITERATURE STUDY
2.1 Evaluation of Test Estimation Techniques
2.1.1 Categories of Techniques
2.1.2 Model Evaluation Criteria
2.1.3 Criteria for Evaluation of Test Estimation Techniques
2.2 Test Estimation Techniques
2.2.1 Judgement and Rule of Thumb
2.2.2 Analogy and Work Breakdown Techniques
2.2.3 Factors and Weights
2.2.4 Size-based Estimation Models
2.2.4.1 Test Size-based Estimation
2.2.4.2 AssessQ Model
2.2.4.3 Estimating test volume and effort
2.2.5 Neural Network and Fuzzy Models
2.2.5.1 Artificial Neural Network (ANN) Estimation Model
2.2.5.2 Fuzzy Logic Test Estimation Model
2.3 Other Literature of Interest on Test Estimation
2.3.1 Functional Testing
2.3.2 Non-functional Testing
2.3.3 Fuzzy Logic Estimation
2.3.4 Model-driven Testing
2.3.5 Agile Testing
2.3.6 Service Oriented Architecture and Cloud based Technologies
2.3.7 Automated testing
2.4 Summary
CHAPTER 3 CONCLUSION
